Can I trace Google Gen AI with the VertexAI instrumentor? An example of my code:
import os
import google.generativeai as genai
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
# Create the model
generation_config = {
    "temperature": 1,
    "top_p": 0.95,
    "top_k": 40,
    "max_output_tokens": 8192,
    "response_mime_type": "text/plain",
}
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    generation_config=generation_config,
    system_instruction="You're a bot which generates only SSML output based on the question. Include pauses and make it a conversation between two people in a podcast. Use US voices and add hmmm and umm and make the voices talk over each other. Do not include intro and outro music.",
)
chat_session = model.start_chat(history=[])
response = chat_session.send_message("INSERT_INPUT_HERE")
print(response.text)

Hi Harikrishna D., thank you for your interest in Phoenix and OpenInference! Nominally, I believe our VertexAI autoinstrumentor only instruments the vertexai and google-cloud-aiplatform SDKs. Roger Y. will know more about whether our existing autoinstrumentor does anything for the generativeai SDK, since Google shares a lot of core modules.
Thank you
Given that this has been merged (https://github.com/open-telemetry/opentelemetry-python-contrib/tree/a5474c3b290f5f[…]nstrumentation-genai/opentelemetry-instrumentation-google-genai), do we still need to wait for Phoenix to add support?
It will definitely work with Phoenix, with the slight caveat that the semantic conventions have moved a bit. We will have first-class support (in flight right now), but we could also figure out a translation layer. Let me keep you in the loop. Gemini and google-genai are our top-priority ticket right now, since Vertex seems to be largely legacy at this point.
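In the meantime, a rough sketch of how the merged OTel instrumentation might be wired up to export traces to Phoenix. Note the instrumentor class name `GoogleGenAiSdkInstrumentor` and the Phoenix endpoint `http://localhost:6006/v1/traces` are assumptions here, not verified against the released package, so check the package's README for the exact names:

```python
# Sketch: route spans from the OTel google-genai instrumentation to Phoenix.
# Assumes: pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http \
#          opentelemetry-instrumentation-google-genai google-genai
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Class name below is an assumption based on the merged instrumentation package.
from opentelemetry.instrumentation.google_genai import GoogleGenAiSdkInstrumentor

# Point the OTLP/HTTP exporter at a locally running Phoenix collector
# (localhost:6006 is Phoenix's default port; adjust for your deployment).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(provider)

# Enable the instrumentation; subsequent google-genai calls should emit spans.
GoogleGenAiSdkInstrumentor().instrument()
```

As noted above, the span attributes follow the newer OTel GenAI semantic conventions rather than OpenInference's, so some fields may not render in Phoenix until the translation layer or first-class support lands.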
