Can we create a new column to appear in Phoenix traces?
Yes, you can add custom columns to Phoenix traces by including additional metadata or attributes when instrumenting your spans. For example, you can add a "user" or any other custom field as a span attribute or metadata, and it will appear as a column in the Phoenix UI if properly set during span creation or via supported instrumentation libraries. See customize spans with metadata and the manual instrumentation guide.

However, you cannot update existing spans with new columns after they have been logged; you must set the custom attribute at span creation time. If you need to add metadata after the fact, you must clear the existing spans and re-upload them with the new column, or change the span_id before re-uploading. See the discussion on updating spans.

Would you like more detail or a code example?

Sources:
- Manual Instrumentation Guide
- Slack: Updating Spans with Metadata
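A minimal sketch of the span-creation approach described above. OTel attribute values must be primitives (str/bool/int/float, or sequences of those), so anything nested is JSON-serialized first; the tracer wiring in the comments is illustrative, not a Phoenix-mandated API:

```python
import json

def span_attributes(custom_fields):
    """Coerce custom fields into OTel-compatible attribute values.

    Attribute values must be str/bool/int/float (or sequences of those),
    so nested structures such as dicts are JSON-serialized.
    """
    out = {}
    for key, value in custom_fields.items():
        if isinstance(value, (str, bool, int, float)):
            out[key] = value
        else:
            out[key] = json.dumps(value)
    return out

# With a configured tracer, pass the attributes at span creation time
# (the span name "my-llm-call" is just an example):
#
# from opentelemetry import trace
# tracer = trace.get_tracer(__name__)
# with tracer.start_as_current_span(
#     "my-llm-call", attributes=span_attributes({"user": "alice"})
# ):
#     ...

print(span_attributes({"user": "alice", "metadata": {"run": 1}}))
```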
💡 Hint: Mention RunLLM in the thread for followups.
I am not clear on how to add a new column..
is there a specific example?
I see this example.. Add Attributes, Metadata, Users | Phoenix. But it is limited to session, user, metadata... How can I make a custom column?
I want to set the column value based on the response from the LLM
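One hedged sketch of that pattern: keep the span open until the model responds, then derive the attribute value from the response. `DemoSpan`, `demo_llm`, and the `weather_label` key are all made-up stand-ins; in real code the span would come from `trace.get_current_span()` inside an active trace:

```python
class DemoSpan:
    """Stand-in for an OTel span, only to show the call pattern."""
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

def demo_llm(prompt):
    # Placeholder for model.invoke(...); returns a canned label.
    return "sunny"

span = DemoSpan()                       # real code: trace.get_current_span()
label = demo_llm("describe the weather")
span.set_attribute("weather_label", label)  # value derived from the LLM response
print(span.attributes)
```

Because the attribute is set while the span is still open, it is recorded before the span is exported, which is what allows it to surface in the Phoenix UI.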
This is great.. thank you.. One more question: if I want to access metadata to insert a key/value into it on the current_span, how do I do that?
Would it be like this?
from opentelemetry import trace

current_span = trace.get_current_span()
current_span.set_attribute("metadata", {'key': 'value'})

Hey.. does this work with the auto-instrumentor while using Phoenix OTEL?
I am using LangChain precisely, so that means I should not use the auto-instrumentor for now... but that creates another problem for me. I am using base OTEL with set_input, set_output, and set_attributes... But I have an image in my input. With the auto-instrumentor, the image was being saved in the traces; but with base OTEL instrumentation, my input image does not get captured in the traces. How do I make that work? I need to have the input image in my traces too..
my code looks something like this:
import base64

import httpx
from langchain_core.messages import HumanMessage  # assumes langchain-core is installed
from langchain_openai import ChatOpenAI           # assumes langchain-openai is installed

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
model = ChatOpenAI(model="gpt-4o-mini")
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}", "detail": "low"},
        },
    ],
)

if __name__ == "__main__":
    response = model.invoke([message])
    print(response.content)
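Phoenix renders images when the span carries structured OpenInference message-content attributes rather than a single raw input string. A hedged sketch of building those flat attribute keys by hand — the key names follow my reading of the OpenInference semantic conventions and should be checked against the spec for your Phoenix version:

```python
def image_message_attributes(text, image_data_url):
    """Flat OpenInference-style attribute keys for one user message
    carrying text plus an image. Key names are my reading of the
    OpenInference semantic conventions -- verify against the spec."""
    prefix = "llm.input_messages.0."
    return {
        "openinference.span.kind": "LLM",
        prefix + "message.role": "user",
        prefix + "message.contents.0.message_content.type": "text",
        prefix + "message.contents.0.message_content.text": text,
        prefix + "message.contents.1.message_content.type": "image",
        prefix + "message.contents.1.message_content.image.image.url": image_data_url,
    }

# With a real span (illustrative wiring, not a Phoenix-specific API):
# for key, value in image_message_attributes(prompt, data_url).items():
#     span.set_attribute(key, value)

print(image_message_attributes("describe the weather in this image",
                               "data:image/jpeg;base64,AAAA"))
```

The auto-instrumentor emits attributes in this shape for you, which is why the image shows up there but not with a plain set_input string.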
I can use the auto-instrumentor with the using_metadata tag, but then I cannot do set_input and set_output... Can I? And also, how do I update metadata after the LLM returns the call?
I tried the code above that you provided... I can see the base64 image string in the input, but the image is not visible in the trace like it is in the auto-instrumentor case... It makes it inconvenient, because without seeing the image in the trace it is difficult to say whether the LLM did well or not..
I am able to do all this with a combination of the manual and auto instrumentors for now
Since I am combining auto and manual, I see this chain upon my LLM call.. I hope it is not consuming LLM tokens twice for me...
