Hi, is there an option to add an LLM's parameters into the trace? We are currently using bedrock to query, and are running into some unexpected issues so we wanted to test out different chunking strategies, temperatures, etc. Is it possible to add those fields as columns of our traces so we can run evaluations on them?
Yes, you can add LLM parameters to your traces in Phoenix. This can be done by customizing the trace data to include additional metadata such as chunking strategies, temperatures, and other parameters. According to the Customize Traces | Phoenix documentation, you can modify the spans DataFrame to include these parameters as additional columns before logging the traces back to the server. Would you like more detail on how to implement this? Sources: - https://docs.arize.com/phoenix/~~/changes/v6Zhm276x8LlKmwqElIA/quickstart/llm-traces#3b1b2852c9304dd89e3c3e1658a4818a - https://docs.arize.com/phoenix/tracing/llm-traces#P%3A1
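A minimal sketch of what "parameters as columns" could look like, assuming you have pulled spans into a pandas DataFrame (for example via `px.Client().get_spans_dataframe()` in Phoenix); the column names and values below are illustrative stand-ins, not your real span data:

```python
import pandas as pd

# Toy stand-in for a spans DataFrame; in practice you might obtain one
# with px.Client().get_spans_dataframe(). Column names here are illustrative.
spans_df = pd.DataFrame({
    "context.span_id": ["a1", "b2"],
    "name": ["llm_call", "llm_call"],
})

# Attach the parameters you swept over as extra columns so evaluations
# can group or filter on them later.
spans_df["temperature"] = [0.2, 0.9]
spans_df["chunking_strategy"] = ["fixed_512", "recursive"]

print(spans_df[["context.span_id", "temperature", "chunking_strategy"]])
```

Once the columns exist, each evaluation run can be sliced by temperature or chunking strategy to compare configurations side by side.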
💡 Hint: Mention RunLLM in the thread for followups.
Hi Nouamane B., please let us know if the answer above suffices. If you need more support, we can help you add attributes to specific spans depending on how you're instrumenting.
Yes if it's not too much trouble, could you explain how adding temperature to a span would work?
I am instrumenting to the phoenix cloud instance
One way is to use our convenience context managers (using_metadata) to wrap your call to the LLM; an example can be found directly here: https://github.com/Arize-ai/openinference/blob/main/python/openinference-instrumentation/src/openinference/instrumentation/context_attributes.py#L122
this won't use our invocation parameters semantic convention, but it will be able to dynamically adjust metadata on your span as you change different values
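To make the mechanism concrete, here is a stdlib-only sketch of the pattern a context manager like openinference's using_metadata follows: it stashes metadata in context so an instrumentor can copy it onto any spans created inside the block. This is a simplified imitation of the idea, not openinference's actual implementation:

```python
from contextlib import contextmanager
from contextvars import ContextVar

# Context slot holding the "current" metadata, analogous to what
# openinference's using_metadata manages under the hood.
_metadata: ContextVar[dict] = ContextVar("metadata", default={})

@contextmanager
def using_metadata(metadata: dict):
    token = _metadata.set(metadata)
    try:
        yield
    finally:
        _metadata.reset(token)

def call_llm(prompt: str) -> dict:
    # A real instrumentor would read the context and attach it to the
    # span; here we return it to show what would land on the span.
    return {"prompt": prompt, "span_metadata": _metadata.get()}

with using_metadata({"temperature": "0.9"}):
    result = call_llm("What is the capital of Peru")

print(result["span_metadata"])  # {'temperature': '0.9'}
```

Because the metadata lives in a context variable, anything called inside the `with` block (including nested pipeline steps) sees the same values, and they fall away automatically when the block exits.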
Is there a way to do this with the HaystackInstrumentor?
We have a generator class which is just abstraction of a small chat pipeline
from phoenix.otel import register
from openinference.instrumentation.haystack import HaystackInstrumentor
from openinference.instrumentation import using_attributes

temperature = 0.9
print(f'registering tracing with initial temperature of {temperature}')
tracer_provider = register(
    project_name="default",
    endpoint="https://app.phoenix.arize.com/v1/traces"
)
HaystackInstrumentor().instrument(
    tracer_provider=tracer_provider
)

def main():
    print('setting up generator')
    prompt = 'What is the capital of Peru'
    generator = GeneratorService()
    # generator.generate(prompt=prompt)
    print('sending to generate with pipeline')
    print(generator.generate_with_pipeline(prompt=prompt, temperature=temperature))

main()
Or would I have to build it out custom?
If the Haystack instrumentor doesn't natively instrument the temperature, you need to use the context manager to wrap the call to Haystack itself in your code
You can use the using_metadata context manager around your call to generator.generate and it should show up on your spans
Ah I understand
I will try this thanks Dustin N.
Good luck! Please let us know how it goes
