Hi everyone! I am new here. I am interested in using the Phoenix Playground to let a customer test our system directly on the platform, without using our final product. The idea is to help them refine the prompts and see the traces in a convenient way, which Phoenix already does great. What is the problem? Our product is built on Azure OpenAI and a Search Index, and I do not have an endpoint ready for this. I am not sure how to configure it, or whether there is a way to run my Python code on Phoenix. We just use the chat completion function, which includes the index data inside the request. I see I can configure the AI provider in Settings, but that just calls the LLM directly without passing any reference to the index. Not sure if I am explaining myself well. Thanks in advance.
Juan C. The Playground can be great for replaying specific examples:
Retrieval is done, text is returned
Based on the retrieved context, call an LLM
Are you using a framework, or is it all hand-coded? Phoenix doesn't run your application itself, but if you trace your application you can replay specific retrieval events based on your returned data. We don't replay the search itself, but we can replay the data post-search and let you vary the template. The key is to use prompt variables and a template together: the variables hold the data returned from the search index. This lets you edit your prompt template and replay the LLM response against the same retrieved data.
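A minimal sketch of that pattern (the template text and helper names here are hypothetical, not Phoenix APIs): keep the template and its variables separate so a tracer can record both, and a tool like the Playground can re-render an edited template over the same retrieved data.

```python
# Hypothetical prompt template; the wording is illustrative only.
TEMPLATE = (
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}"
)


def build_messages(retrieved_chunks, question):
    """Bind search results to prompt variables, then render the template.

    Recording `variables` alongside TEMPLATE is what makes the
    generation step replayable: swap the template, keep the variables.
    """
    variables = {
        "context": "\n---\n".join(retrieved_chunks),
        "question": question,
    }
    messages = [{"role": "user", "content": TEMPLATE.format(**variables)}]
    return messages, variables
```

The LLM call itself then takes `messages` as usual; only the template/variables split changes.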
Thanks Jason for your fast response! I am using the Azure OpenAI SDK directly, with their Chat Completion method, where you can include the Search Index information. I already configured my Azure endpoint on Phoenix, but if I want to add a source, the only way I can see is in the dataset view, no? It would be great to have a way to point to the retriever, in this case by including the index as a parameter in the code, since it is already part of the chat completion method.
completion = client.chat.completions.create(
    model=deployment,
    messages=messages,
    max_tokens=2500,
    temperature=0,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stop=None,
    stream=False,
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": f"{search_endpoint}",
                    "index_name": search_index,
                    "semantic_configuration": "semantic",
                    "query_type": "semantic",
                    "fields_mapping": {
                        "content_fields_separator": "\n",
                        "content_fields": ["content"],
                        "filepath_field": "metadata_storage_path",
                        "title_field": None,
                        "url_field": None,
                        "vector_fields": [],
                    },
                    "in_scope": True,
                    "role_information": "XXXX",
                    "filter": "conversationId eq 'XXXX'",
                    "strictness": 1,
                    "top_n_documents": 10,
                    "authentication": {"type": "api_key", "key": f"{search_key}"},
                },
            }
        ]
    },
)
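For replaying the post-search step, one option is to pull the retrieved chunks back out of the integrated call's response and use them as prompt variables. This sketch assumes the Azure "On Your Data" response shape where the assistant message carries a `context` object with a `citations` list (each citation having a `content` field); the helper name is made up, and the exact shape may differ by API version, so treat this as an assumption to verify against your traces.

```python
def citations_from_response(response_message, separator="\n"):
    """Join the retrieved chunks from an Azure 'On Your Data' response
    message into one context string.

    Assumed shape (verify against your own responses):
        response_message["context"]["citations"] -> [{"content": ...}, ...]
    The joined string can then serve as the `context` prompt variable
    when replaying the generation step with an edited template.
    """
    citations = response_message.get("context", {}).get("citations", [])
    return separator.join(c["content"] for c in citations)
```

With the chunks extracted, the generation call can be replayed as a plain chat completion (no `extra_body`) whose prompt template interpolates the same retrieved text.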
The Phoenix team is in Phoenix Support, please feel free to post in that channel.
Thanks a lot Xander S.! What an amazing and fast reaction from Arize! One interesting goal to explore would be enabling business teams to take on a larger part of the prompt-engineering work, since they have the know-how and clearly understand what they want during the development phase. Enhancing the Playground with the ability to include and monitor the data source would greatly accelerate the process of delivering a bot.
