Hi folks, for the Hallucination and Ref evaluators, should we use eval_model = OpenAIModel(model="gpt-4-turbo-preview")? I have an AzureChatOpenAI key, and if I use it for evals it throws 'AzureChatOpenAI' object has no attribute 'default_concurrency'. I tried patching it with eval_model.default_concurrency = 1 but that didn't work. Is there any workaround for this?
It seems likely that the issue arises because the AzureChatOpenAI object is not directly supported by Phoenix's evaluation framework. Instead, you should use the OpenAIModel class with Azure-specific parameters. According to Phoenix documentation, you can configure the OpenAIModel for Azure by specifying the azure_endpoint and api_version parameters. Here's an example of how you might set it up:

```
from phoenix.evals import OpenAIModel

eval_model = OpenAIModel(
    model="gpt-4-turbo-preview",
    azure_endpoint="https://your-endpoint.azure.com/",
    api_version="2023-09-15-preview",
    api_key="your-azure-api-key",
)
```

This setup should allow you to use the OpenAIModel with Azure OpenAI for evaluations without encountering the default_concurrency attribute error. Would you like more details on setting up the model or running evaluations?

Sources:
- Phoenix documentation
💡 Hint: Mention RunLLM in the thread for followups.
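As a quick sanity check before constructing the model, the Azure parameters above can be assembled and validated in one place. This is a hypothetical helper (the function name and the validation check are mine, not Phoenix's); the keyword names follow the example above:

```python
# Hypothetical helper: collects the Azure-specific keyword arguments from the
# example above so they can be checked before building phoenix.evals.OpenAIModel.
# Adjust the parameter names to match your installed Phoenix version.

def azure_eval_model_kwargs(deployment, endpoint, api_version, api_key):
    """Build the kwargs for an OpenAIModel pointed at an Azure deployment."""
    if not endpoint.startswith("https://"):
        raise ValueError("azure_endpoint should be the full https:// URL")
    return {
        "model": deployment,  # must match the deployment name in Azure
        "azure_endpoint": endpoint,
        "api_version": api_version,
        "api_key": api_key,
    }


kwargs = azure_eval_model_kwargs(
    "gpt-4-turbo-preview",
    "https://your-endpoint.azure.com/",
    "2023-09-15-preview",
    "your-azure-api-key",
)
# eval_model = OpenAIModel(**kwargs)
```

Centralizing the kwargs this way makes it easy to log or assert on the exact configuration that reaches the model constructor.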
The error message indicates that the specified API deployment for your Azure OpenAI resource could not be found. This could be due to a few reasons:

1. **Incorrect Deployment Name**: Ensure that the deployment name specified in your OpenAIModel configuration matches exactly the deployment name in your Azure OpenAI resource. According to Phoenix documentation, the model parameter should be set to the deployment name.
2. **Recent Deployment**: If you have just created the deployment, it might take a few minutes for it to become active. The error message suggests waiting a few minutes and trying again.

Would you like more details on how to verify your deployment settings or troubleshoot further?

Sources:
- Phoenix documentation
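One way to spot a deployment-name mismatch is to reconstruct the URL that Azure resolves for a chat-completions call and compare its deployment segment with what the Azure portal shows. This is an illustrative sketch (the URL shape follows Azure's usual `/openai/deployments/...` path; the endpoint and names are placeholders):

```python
# Rebuild the URL an Azure OpenAI chat-completions request resolves to, so the
# deployment segment can be compared against the deployment name in the portal.

def azure_chat_url(endpoint, deployment, api_version):
    base = endpoint.rstrip("/")  # tolerate a trailing slash on the endpoint
    return (
        f"{base}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )


url = azure_chat_url(
    "https://your-endpoint.azure.com", "gpt-4-turbo-preview", "2023-09-15-preview"
)
print(url)
```

If the printed deployment segment differs from the name shown under your resource's Deployments blade, that mismatch is the usual cause of the "deployment not found" error.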
The configuration you provided for the OpenAIModel looks mostly correct, but ensure that the model parameter is set to the correct deployment name as configured in your Azure OpenAI resource. The azure_endpoint should be the full URL of your Azure OpenAI endpoint, and the api_version should match the version supported by your deployment. Here's a checklist to make sure everything is set up correctly:

1. **Model Name**: Ensure that the model parameter matches the deployment name in Azure. This is crucial, as it identifies the specific model deployment you want to use.
2. **Azure Endpoint**: The azure_endpoint should be the full URL provided by Azure for your OpenAI resource. It typically looks like https://<your-resource-name>.openai.azure.com/.
3. **API Version**: Verify that the api_version is correct and supported by your Azure deployment. You can check the available versions in the Azure portal or documentation.
4. **API Key**: Ensure that the api_key is correct and has the necessary permissions to access the Azure OpenAI resource.

If you continue to face issues, double-check these parameters in the Azure portal to ensure they match your configuration. Would you like more detailed guidance on any of these steps?

Sources:
- Phoenix documentation
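Checklist items 2 and 3 can be machine-checked before any request is sent. The patterns below are assumptions based on the usual Azure formats (`https://<resource>.openai.azure.com/` endpoints and `YYYY-MM-DD[-preview]` API versions), not an official validator:

```python
import re

# Illustrative format checks for the endpoint and api_version checklist items.
ENDPOINT_RE = re.compile(r"^https://[\w-]+\.openai\.azure\.com/?$")
API_VERSION_RE = re.compile(r"^\d{4}-\d{2}-\d{2}(-preview)?$")


def check_azure_config(endpoint, api_version):
    """Return a list of human-readable problems; empty means both look OK."""
    problems = []
    if not ENDPOINT_RE.match(endpoint):
        problems.append(
            "endpoint does not look like https://<resource>.openai.azure.com/"
        )
    if not API_VERSION_RE.match(api_version):
        problems.append("api_version does not look like YYYY-MM-DD[-preview]")
    return problems
```

Running this against your configuration catches the most common copy-paste mistakes (an http:// endpoint, a stray path segment, or a malformed version string) before they surface as opaque API errors.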
Yes, you can trace the requests and responses of your API hosted on Azure using Phoenix. To do this, you need to manually instrument your application using OpenTelemetry, which Phoenix supports. Here's a general approach:

1. **Set Up OpenTelemetry**: Install the necessary OpenTelemetry packages in your application. This includes the OpenTelemetry SDK and an exporter to send traces to Phoenix.
2. **Configure Tracing**: Set up a tracer provider in your application to capture and export traces. You can use the OTLPSpanExporter to send traces to a Phoenix instance.
3. **Instrument Your API**: Use the OpenTelemetry API to create spans for your API requests and responses. This involves wrapping your API logic with spans to capture relevant data.
4. **Send Traces to Phoenix**: Ensure your traces are sent to a Phoenix instance running locally or in the cloud. You can configure the endpoint in the OTLPSpanExporter.

Would you like more detailed guidance on setting up OpenTelemetry for your specific use case?

Sources:
- https://github.com/Arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-openai/README.md
- https://github.com/Arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-haystack/README.md
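To see what the span-wrapping in step 3 captures without standing up any infrastructure, here is a dependency-free toy sketch of the pattern. This mimics, and is not, the OpenTelemetry API; the class and function names are mine:

```python
import time
from contextlib import contextmanager

# Toy illustration of the span pattern: a named record of attributes plus a
# measured duration. Real OpenTelemetry spans add IDs, context propagation,
# and export, but the shape of the data is the same.

class ToySpan:
    def __init__(self, name):
        self.name = name
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value


@contextmanager
def start_span(name):
    span = ToySpan(name)
    start = time.time()
    try:
        yield span
    finally:
        span.attributes["duration_s"] = time.time() - start


with start_span("api_request") as span:
    span.set_attribute("http.method", "POST")
    span.set_attribute("http.url", "https://chatbot.com")

print(span.name, span.attributes)
```

Once this pattern is clear, swapping in `tracer.start_as_current_span` from the OpenTelemetry SDK is a mechanical change.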
To trace your API using Phoenix with the endpoint URL you have, follow these steps:

1. **Install OpenTelemetry and Phoenix Packages**: Ensure you have the necessary packages installed in your application environment.

```
pip install arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```

2. **Configure the Tracer**: Set up the OpenTelemetry tracer to send traces to your Phoenix endpoint.

```
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Set your Phoenix endpoint
endpoint = "http://your-phoenix-endpoint/v1/traces"

# Configure the tracer provider
tracer_provider = TracerProvider()
span_exporter = OTLPSpanExporter(endpoint=endpoint)
span_processor = SimpleSpanProcessor(span_exporter)
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)

# Get a tracer
tracer = trace.get_tracer(__name__)
```

3. **Instrument Your API**: Use the tracer to create spans around your API logic.

```
from flask import Flask, request

app = Flask(__name__)

@app.route('/your-api-endpoint', methods=['POST'])
def your_api_function():
    with tracer.start_as_current_span("api_request") as span:
        # Add request details to the span
        span.set_attribute("http.method", request.method)
        span.set_attribute("http.url", request.url)

        # Your API logic here
        response = "Your response"

        # Add response details to the span
        span.set_attribute("http.status_code", 200)
        return response

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
```

4. **Run Your Application**: Start your application and ensure it sends traces to the Phoenix endpoint.

This setup will allow you to trace requests and responses of your API using Phoenix. Would you like more details on any specific part of this setup?

Sources:
- [Phoenix Documentation](https://github.com/Arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-haystack/README.md)
RunLLM, this is a sample request: `results = requests.post(url="https://chatbot.com", data=json.dumps(input_data))`
To trace the request and response of your API call using Phoenix, you can manually instrument the code with OpenTelemetry. Here's how you can do it:

1. **Set Up OpenTelemetry**: Ensure you have the necessary OpenTelemetry packages installed.

```
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```

2. **Configure the Tracer**: Set up the OpenTelemetry tracer to send traces to your Phoenix endpoint.

```
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Set your Phoenix endpoint
endpoint = "http://your-phoenix-endpoint/v1/traces"

# Configure the tracer provider
tracer_provider = TracerProvider()
span_exporter = OTLPSpanExporter(endpoint=endpoint)
span_processor = SimpleSpanProcessor(span_exporter)
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)

# Get a tracer
tracer = trace.get_tracer(__name__)
```

3. **Instrument the API Call**: Use the tracer to create spans around your API call.

```
import requests
import json

url = "https://chatbot.com"
input_data = {"key": "value"}

with tracer.start_as_current_span("api_request") as span:
    # Add request details to the span
    span.set_attribute("http.method", "POST")
    span.set_attribute("http.url", url)

    # Make the API call
    response = requests.post(url, data=json.dumps(input_data))

    # Add response details to the span
    span.set_attribute("http.status_code", response.status_code)
    span.set_attribute("response.body", response.text)

    print(response.text)
```

This setup will allow you to trace the request and response of your API call using Phoenix. Would you like more details on any specific part of this setup?

Sources:
- [https://github.com/Arize-ai/openinference/blob/main/python/examples/dspy-rag-fastapi/frontend/pages/3_optimize_pipeline.py](https://github.com/Arize-ai/openinference/blob/main/python/examples/dspy-rag-fastapi/frontend/pages/3_optimize_pipeline.py)
- [https://github.com/Arize-ai/phoenix/blob/main/examples/manually-instrumented-chatbot/frontend/src/app.py](https://github.com/Arize-ai/phoenix/blob/main/examples/manually-instrumented-chatbot/frontend/src/app.py)
