Hi, for my chatbot application, Phoenix projects are created per thread_id. When I create a new thread, Phoenix successfully detects it as a project and provides the trace link. But if I then create another new thread, it isn't detected as a project. If I terminate the session and run the server again, Phoenix detects it as a project and provides the trace link. Why is that, and how can I solve the issue?
Phoenix does not support dynamically changing the project after it has been set, except in a notebook environment, where the using_project context manager can be used (source). This could be why new threads are not detected as projects without restarting the server. To solve this, you can maintain a list of project names and add the thread_id to that list as soon as the thread is created. That way you can check whether the thread_id already exists in the list and avoid creating a new project each time. Here's how you can modify your code to achieve this:
```
from typing import Optional
import os

from httpx import Client
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
import phoenix as px

# Set up tracing
tracer_provider = TracerProvider()
trace.set_tracer_provider(tracer_provider)
span_exporter = OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces")
span_processor = SimpleSpanProcessor(span_exporter)
tracer_provider.add_span_processor(span_processor)
tracer = trace.get_tracer(__name__)

# Launch Phoenix app
session = px.launch_app(use_temp_dir=False)
tracers = {}

# GraphQL query to get project ID
client = Client(base_url="http://localhost:6006")
query = """
query ($after: String = null) {
  projects(after: $after) {
    edges {
      project: node {
        id
        name
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
"""

def get_id_for_project_name(name: str) -> Optional[str]:
    has_next_page = True
    after = None
    while has_next_page:
        response = client.post("/graphql", json={"query": query, "variables": {"after": after}})
        if response.status_code != 200:
            raise Exception(f"Failed with status code: {response.status_code}")
        data = response.json()["data"]
        edges = data["projects"]["edges"]
        projects = map(lambda edge: edge["project"], edges)
        for project in projects:
            if project["name"] == name:
                return project["id"]
        page_info = data["projects"]["pageInfo"]
        has_next_page = page_info["hasNextPage"]
        after = page_info["endCursor"]
    return None

def get_current_project_name() -> str:
    project_name = os.getenv("CURRENT_PROJECT_NAME")
    if not project_name:
        raise Exception("Current project name is not set")
    return project_name

# Set the current project name as an environment variable
os.environ["CURRENT_PROJECT_NAME"] = "your_current_project_name"

# Get the current project name
current_project_name = get_current_project_name()

# Get project ID dynamically for the current project
current_project_id = get_id_for_project_name(current_project_name)
if current_project_id is None:
    raise Exception(f"Project ID for '{current_project_name}' not found")

# List to store project names
project_names = []

# Get the current span and trace ID
current_span = trace.get_current_span()
trace_id = current_span.get_span_context().trace_id
trace_id_hex = format(trace_id, '032x')

# Construct the trace link
trace_link = f"http://localhost:{os.getenv('PORT', '6006')}/projects/{current_project_id}/traces/{trace_id_hex}"

print(f"Trace Link: {trace_link}")
```

Your existing code:
```
@extend_schema(
    tags=['Main'],
    operation_id='query',
    summary='Chained Query',
    description='API for Chained Queries',
    parameters=[
        OpenApiParameter(name='app_id', description='App ID', required=True,
                         type=OpenApiTypes.UUID, location=OpenApiParameter.PATH),
        OpenApiParameter(name='thread_id', description='Thread ID', required=True,
                         type=OpenApiTypes.UUID, location=OpenApiParameter.PATH),
    ],
    responses={200: OpenApiResponse(description='Success')}
)
@api_view(['POST'])
def query(request, app_id, thread_id):
    if thread_id not in tracers:
        tracer = tracing.setup_tracing(thread_id)
        tracers[thread_id] = tracer
    else:
        tracer = tracers[thread_id]

    logging.info("New conversation started for app id : %s holding the thread id : %s",
                 app_id, thread_id)

    if request.method == 'POST':
        payload = request.data
        query = payload['query']
        type = payload['type']
        image_file_path = ""

        language = True
        if 'language' in payload:
            language_str = payload['language']
            if language_str == 'true':
                language = True
            elif language_str == 'false':
                language = False

        voice = False
        if 'voice' in payload:
            # Parse the flag by comparison rather than eval()-ing request input
            voice = payload['voice'].lower() == 'true'

        memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True,
                                          input_key="question", output_key="answer")

        # Check if thread_id is in project_names list
        if thread_id not in project_names:
            project_names.append(thread_id)

        with tracer.start_as_current_span(name="completions") as span:
            try:
                span.set_attribute("input.value", f"Query : {query}, Language : {str(language)}")
                if type == 'Agent':
                    span.set_attribute("openinference.span.kind", "AGENT")
                    answer, current_memory = conversational_agent.get_result(
                        query, language, voice, memory, app_id, thread_id, tracer)
                elif type == 'Chain':
                    span.set_attribute("openinference.span.kind", "CHAIN")
                    answer, current_memory = conversational_chain.get_result(
                        query, language, voice, image_file_path, memory, app_id, thread_id, tracer)
                span.set_status(Status(StatusCode.OK))
                span.set_attribute("output.value", answer["answer"])
                return Response(answer, status=status.HTTP_200_OK)
            except Exception as e:
                logging.info("Exception occurred while trying to call get result method : %s", e)
                answer = {
                    "answer": "Query overload in LLM. Please try again shortly",
                    "type": "doc",
                    "documents": ['']
                }
                span.set_status(Status(StatusCode.ERROR))
                return Response(answer, status=status.HTTP_200_OK)
```
Would you like more details on any specific part?

Sources:
- GitHub Issue #3207
- Slack Conversation
Thank You !!!
Hi Mikyo, it's like this: I am creating an app, and for that app I can create multiple threads. I am including traces for each thread, so every thread I create shows up as a project in the Phoenix dashboard. When I run the server and create a new thread it works properly, but if I create another new thread it won't be detected as a project. If I restart the server, it will be detected and give the response and the trace link.
So the project names are dynamic? Can you give a bit of an example of what the project names look like? Are you using graphql to resolve the URL?
Alternatively you could track threads as metadata - does that not work here?
a6190298-889e-4e9b-a8f8-b01a9a676849 - this is my app ID
dc0c6059-2029-4f7d-a627-1bee0976cffa - this is my thread ID; this thread ID is shown as the project name in Phoenix.
```
def setup_tracing(app_name):
    resource = Resource(attributes={
        ResourceAttributes.PROJECT_NAME: f'{app_name}'
    })
    tracer_provider = TracerProvider(resource=resource)
    trace.set_tracer_provider(tracer_provider)
    tracer = trace.get_tracer(__name__)
    collector_endpoint = f"http://localhost:{get_env_port()}/v1/traces"
    span_exporter = OTLPSpanExporter(endpoint=collector_endpoint)
    simple_span_processor = SimpleSpanProcessor(span_exporter=span_exporter)
    trace.get_tracer_provider().add_span_processor(simple_span_processor)
    return tracer
```
Is the separation by project per thread needed for access control?
Sorry, I didn't get it!
I guess the question is you could setup instrumentation once and then have each thread use different metadata (https://docs.arize.com/phoenix/tracing/how-to-tracing/customize-spans#using_metadata) so you track each thread under a single application project. Mainly trying to understand what dc0c6059-2029-4f7d-a627-1bee0976cffa signifies!
And I guess you want to print a URL for the trace is that correct? Who consumes the URL?
Yes
