Hi team, I'm using Guardrails' Profanity Check. I previously added manual instrumentation and my traces show up in Phoenix. I haven't added any instrumentation for the guardrail, yet it still appears in Phoenix (see image). The guardrails library package itself includes these files for telemetry tracing (see image). How can I disable tracing for Guardrails?
RunLLM
for tool in requested_tools:
    tool_instance = tools_dict[tool]
    tool_output = tool_instance.func("sample query")  # Replace "sample query" with actual query if needed
    span.set_attribute(f"output.{tool_instance.name}", str(tool_output))
    tool_list.append(tool_instance)
span.set_attribute("output.value", str(tool_list))
return tool_list

The "sample query" parameter is not correct.
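The problem with the suggestion above: calling tool_instance.func("sample query") executes every tool with a dummy input at selection time, when the real query only exists later, once the agent invokes the tool. A toy sketch (plain Python, no LangChain; names are illustrative, not from the real code) of eager versus deferred invocation:

calls = []

def doc_search(query):
    calls.append(query)          # a real vector-DB query would run here as a side effect
    return f"docs for {query!r}"

tools_dict = {"Search VectorDB": doc_search}

# Eager call at pick time: the tool runs with a meaningless placeholder.
picked = tools_dict["Search VectorDB"]
picked("sample query")
print(calls)                     # ['sample query'] -- a wasted call

# Deferred: just hand the callable onward; it runs later with the real query.
picked("actual user question")
print(calls[-1])                 # actual user question

So pick_tools should only select and return the callables, not invoke them.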
tools_dict = {
    "Search VectorDB": Tool(
        name="Search VectorDB",
        func=lambda query, init_pinecone=init_object_list[1], init_embeddings=init_object_list[0]:
            primary_tool_functions.doc_search(query=query, pinecone=init_pinecone,
                                              embedding_instance=init_embeddings, tracer=tracer),
        description="""
        useful for when assistant needs to query through vector DB and pick top most similar documents to the
        given query
        """
    ),
    "Query SQL DB": Tool(
        name="Query SQL DB",
        func=lambda query, init_sql=init_object_list[2], sql_prompt=init_object_list[3]:
            primary_tool_functions.query_sql(query=query, sql_agent=init_sql,
                                             sql_agent_prompt=sql_prompt, tracer=tracer),
        description="""
        useful when assistant needs to query from SQL database for a given statistical question and return data
        from SQL database
        """
    ),
    "Create Graphs": Tool(
        name="Create Graphs",
        func=lambda query, init_llm_obj=init_object_list[4]:
            primary_tool_functions.create_graphs(query=query, llm_instance=init_llm_obj, tracer=tracer),
        description="""
        useful when assistant needs to create graphs after retrieving the output from 'Query SQL DB' tool
        """
    )
}

I need to pass the 'query' argument in these lambda functions as the parameter in place of "sample query".
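A side note on why the lambdas above bind objects through default arguments (init_pinecone=init_object_list[1], ...) instead of free variables: defaults capture the value at definition time, while a free variable is looked up at call time. A minimal sketch with stand-in objects:

# Toy demonstration of lambda default-argument binding.
objs = ["embeddings-v1", "pinecone-v1"]

late = lambda query: (query, objs[1])            # free variable: resolved when called
bound = lambda query, pc=objs[1]: (query, pc)    # default: captured when defined

objs[1] = "pinecone-v2"
print(late("q"))    # ('q', 'pinecone-v2') -- sees the later mutation
print(bound("q"))   # ('q', 'pinecone-v1') -- keeps the original object

This is why each tool keeps a stable reference to its own Pinecone/LLM object even if init_object_list is later rebound.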
RunLLM But the output still looks like the image; child spans are not displayed. Output:

[Tool(name='Search VectorDB', description='\n useful for when assistant needs to query through vector DB and pick top most similar documents to the \n given query\n ', func=<function pick_tools.<locals>.search_vectordb at 0x000001C2B31922A0>)]

RunLLM
import logging
import ast
from langchain.tools import Tool
from main.agents.tools import primary_tool_functions
from opentelemetry.trace import Status, StatusCode

# Define log levels
logging.basicConfig(level=logging.INFO)

def pick_tools(requested_tools: str, app_id: str, app_configs: dict, tracer):
    """
    This method is used to return a list of tool objects

    Parameters:
        requested_tools (str): requested tool list for the app.
        app_id (str): Current app id
        app_configs (dict): app wise objects

    Returns:
        list: tool objects
    """
    with tracer.start_as_current_span("pick_tools") as span:
        span.set_attribute("openinference.span.kind", "TOOL")
        span.set_attribute("input.requested_tools", requested_tools)
        span.set_attribute("input.app_id", app_id)
        try:
            init_object_list = app_configs[app_id]
            requested_tools = ast.literal_eval(requested_tools)  # to return requested tools
            tool_list = []
            tools_dict = {
                "Search VectorDB": Tool(
                    name="Search VectorDB",
                    func=lambda query, init_pinecone=init_object_list[1], init_embeddings=init_object_list[0]:
                        primary_tool_functions.doc_search(query=query, pinecone=init_pinecone,
                                                          embedding_instance=init_embeddings, tracer=tracer),
                    description="""
                    useful for when assistant needs to query through vector DB and pick top most similar documents to the
                    given query
                    """
                ),
                "Query SQL DB": Tool(
                    name="Query SQL DB",
                    func=lambda query, init_sql=init_object_list[2], sql_prompt=init_object_list[3]:
                        primary_tool_functions.query_sql(query=query, sql_agent=init_sql,
                                                         sql_agent_prompt=sql_prompt, tracer=tracer),
                    description="""
                    useful when assistant needs to query from SQL database for a given statistical question and return data
                    from SQL database
                    """
                ),
                "Create Graphs": Tool(
                    name="Create Graphs",
                    func=lambda query, init_llm_obj=init_object_list[4]:
                        primary_tool_functions.create_graphs(query=query, llm_instance=init_llm_obj, tracer=tracer),
                    description="""
                    useful when assistant needs to create graphs after retrieving the output from 'Query SQL DB' tool
                    """
                )
            }
            for tool in requested_tools:
                tool_list.append(tools_dict[tool])
            span.set_attribute("output.value", str(tool_list))
            return tool_list
        except Exception as e:
            span.set_status(Status(StatusCode.ERROR, str(e)))
            raise e

Rewrite the dictionary without the lambda functions. doc_search, query_sql and create_graphs also have tracing in another Python file, so those should be displayed as child spans inside the pick_tools span. Right now the child spans are not shown; I think it's because the functions are wrapped in lambdas.
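One way to drop the lambdas is functools.partial, sketched here with a stand-in for primary_tool_functions.doc_search (the real one also takes tracer). Note, though, that lambdas versus named functions should not by themselves change span parenting: child spans attach to whichever span is current when the function actually runs, regardless of how it was wrapped.

from functools import partial

# Hypothetical stand-in for primary_tool_functions.doc_search.
def doc_search(query, pinecone, embedding_instance):
    return f"searched {query!r} in {pinecone}"

init_object_list = ["emb-obj", "pinecone-obj"]

# partial pre-binds everything except query, just as the lambda defaults did.
search_vectordb = partial(doc_search,
                          pinecone=init_object_list[1],
                          embedding_instance=init_object_list[0])

print(search_vectordb("rainfall stats"))  # searched 'rainfall stats' in pinecone-obj

In the real code you would set func=partial(primary_tool_functions.doc_search, pinecone=init_object_list[1], embedding_instance=init_object_list[0], tracer=tracer).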
RunLLM The pick-tools trace's output looks like this:

[Tool(name='Search VectorDB', description='\n useful for when assistant needs to query through vector DB and pick top most similar documents to the \n given query\n ', func=<function pick_tools.<locals>.<lambda> at 0x00000245B117CEA0>)]

How do I set the respective tool's trace output as the output here?
import logging
import ast
from langchain.tools import Tool
from main.agents.tools import primary_tool_functions
from opentelemetry.trace import Status, StatusCode

# Define log levels
logging.basicConfig(level=logging.INFO)

def pick_tools(requested_tools: str,
               app_id: str,
               app_configs: dict, tracer):
    """
    This method is used to return list of tool objects

    Parameters:
        requested_tools (str): requested tool list for the app.
        app_id (str): Current app id
        app_configs (dict): app wise objects

    Returns:
        list: tool objects
    """
    with tracer.start_as_current_span("pick_tools") as span:
        span.set_attribute("openinference.span.kind", "TOOL")
        span.set_attribute("input.requested_tools", requested_tools)
        span.set_attribute("input.app_id", app_id)
        try:
            init_object_list = app_configs[app_id]
            requested_tools = ast.literal_eval(requested_tools)  # to return requested tools
            tool_list = []
            tools_dict = {
                "Search VectorDB": Tool(
                    name="Search VectorDB",
                    func=lambda query, init_pinecone=init_object_list[1], init_embeddings=init_object_list[0]:
                        primary_tool_functions.doc_search(query=query, pinecone=init_pinecone,
                                                          embedding_instance=init_embeddings, tracer=tracer),
                    description="""
                    useful for when assistant needs to query through vector DB and pick top most similar documents to the
                    given query
                    """
                ),
                "Query SQL DB": Tool(
                    name="Query SQL DB",
                    func=lambda query, init_sql=init_object_list[2], sql_prompt=init_object_list[3]:
                        primary_tool_functions.query_sql(query=query, sql_agent=init_sql,
                                                         sql_agent_prompt=sql_prompt, tracer=tracer),
                    description="""
                    useful when assistant needs to query from SQL database for a given statistical question and return data
                    from SQL database
                    """
                ),
                "Create Graphs": Tool(
                    name="Create Graphs",
                    func=lambda query, init_llm_obj=init_object_list[4]:
                        primary_tool_functions.create_graphs(query=query, llm_instance=init_llm_obj, tracer=tracer),
                    description="""
                    useful when assistant needs to create graphs after retrieving the output from 'Query SQL DB' tool
                    """
                )
            }
            for tool in requested_tools:
                tool_list.append(tools_dict[tool])
            span.set_attribute("output.value", str(tool_list))
            return tool_list
        except Exception as e:
            span.set_status(Status(StatusCode.ERROR, str(e)))
            raise e
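As an aside, pick_tools receives requested_tools as a string and parses it with ast.literal_eval, which safely evaluates a Python literal. For example:

import ast

# The API sends the requested tool names as a string-encoded list.
requested_tools = ast.literal_eval("['Search VectorDB', 'Create Graphs']")
print(requested_tools)        # ['Search VectorDB', 'Create Graphs']
print(type(requested_tools))  # <class 'list'>

Unlike eval, literal_eval rejects arbitrary expressions, so a malformed or malicious string raises ValueError instead of executing code.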
import ast
import opentelemetry.trace as trace
from opentelemetry.trace.status import Status, StatusCode
# Tool 1
def doc_search(query, pinecone, embedding_instance, tracer):
    with tracer.start_as_current_span("document-search") as span:
        try:
            span.set_attribute("openinference.span.kind", "RETRIEVER")
            span.set_attribute("input.query", query)
            embedded_query = embedding_instance.embed(query)
            retrieved_docs, source, topic, page_link = pinecone.return_docs(embedded_query, 4)
            span.set_attribute("output.retrieved_docs", str(retrieved_docs))
            span.set_status(Status(StatusCode.OK))
        except Exception as e:
            span.set_status(Status(StatusCode.ERROR, str(e)))
            raise
    return retrieved_docs, source, topic, page_link
# Tool 2
def query_sql(query, sql_agent, sql_agent_prompt, tracer):
    with tracer.start_as_current_span("query_sql") as span:
        try:
            span.set_attribute("openinference.span.kind", "TOOL")
            span.set_attribute("input.query", query)
            span.set_attribute("input.prompt", sql_agent_prompt)
            generated_query = sql_agent.get_query(query, sql_agent_prompt)
            span.set_attribute("output.generated_query", generated_query)
            span.set_status(Status(StatusCode.OK))
        except Exception as e:
            span.set_status(Status(StatusCode.ERROR, str(e)))
            raise
    return generated_query
# Tool 3
def create_graphs(query, llm_instance, tracer):
    with tracer.start_as_current_span("create_graphs") as span:
        try:
            span.set_attribute("openinference.span.kind", "TOOL")
            span.set_attribute("input.query", query)
            _plotting_prompt = """
            Assistant will be provided with a 2D list. What assistant has to do is create a Mermaid.js script to generate a
            bar graph or pie chart from the given 2D list. Assistant can choose what should be taken as the X and Y axis.
            Output must be only a Mermaid.js script and nothing other than the Mermaid.js script.
            Here are some sample user queries and assistant outputs,
            ```
            User : [["amal", "kamal", "anura"], [2, 3, 4]]
            Assistant:
            bar
                title Users and their scores
                x-axis Users
                y-axis Scores
                "amal" : 2
                "kamal" : 3
                "anura" : 4
            ```
            """
            response = llm_instance.invoke(_plotting_prompt + f"\n 2D list : {query}")
            formatted_response = f"<div class='mermaid-graph'>{response}</div>"
            span.set_attribute("output.response", formatted_response)
            span.set_status(Status(StatusCode.OK))
            return formatted_response
        except Exception as e:
            span.set_status(Status(StatusCode.ERROR, str(e)))
            raise  # formatted_response may be unbound here, so re-raise instead of returning it

For the tool that gets picked, the trace should display the output of that tool's trace. For example, if the selected tool is 'Search VectorDB', then the trace output should be the output of the doc_search() function within the pick_tools span. See the image: under 'pick-tools', Phoenix should show the doc_search method's trace details.
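The missing child spans come down to *when* doc_search runs relative to the pick_tools span, not to the lambdas. A span becomes a child of whatever span is current at the moment it starts; pick_tools only constructs the Tool objects, and doc_search executes later, after pick_tools has exited. A toy tracer (plain Python via contextvars, not OpenTelemetry, purely to illustrate the parenting rule):

import contextvars

_current = contextvars.ContextVar("current_span", default=None)

class ToySpan:
    """Minimal stand-in for a span: parents to whatever span is current."""
    def __init__(self, name):
        self.name, self.children = name, []
        self.parent = _current.get()
        if self.parent is not None:
            self.parent.children.append(self)
    def __enter__(self):
        self._token = _current.set(self)
        return self
    def __exit__(self, *exc):
        _current.reset(self._token)

def doc_search(query):
    with ToySpan("doc_search"):
        return query.upper()

# Case 1: pick_tools only *builds* the tool; doc_search runs later,
# so its span is NOT a child of pick_tools.
with ToySpan("pick_tools") as pick:
    tool_func = doc_search              # nothing executed yet
print([c.name for c in pick.children])  # []

# Case 2: run the tool while pick_tools is current -> child span appears.
with ToySpan("pick_tools") as pick2:
    tool_func("sample")
print([c.name for c in pick2.children])  # ['doc_search']

So to see doc_search nested under pick-tools in Phoenix, doc_search would have to execute inside the pick_tools span; otherwise its span will nest under whichever span (e.g. the agent's) is active when the tool is actually invoked.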
Oh okay. Thank you very much for the help. ❤️
What are the changes I can make ?
def setup_tracing(app_name):
    resource = Resource(attributes={
        ResourceAttributes.PROJECT_NAME: f'{app_name}'
    })
    tracer_provider = TracerProvider(resource=resource)
    trace.set_tracer_provider(tracer_provider)
    tracer = trace.get_tracer(__name__)
    collector_endpoint = f"http://localhost:{get_env_port()}/v1/traces"
    span_exporter = OTLPSpanExporter(endpoint=collector_endpoint)
    simple_span_processor = SimpleSpanProcessor(span_exporter=span_exporter)
    trace.get_tracer_provider().add_span_processor(simple_span_processor)
    return tracer
# managing tracer objects in cache
session = px.launch_app(use_temp_dir=False)
tracers = {}
@api_view(['POST'])
def query(request, app_id, thread_id):
    if thread_id not in tracers.keys():
        tracer = tracing.setup_tracing(thread_id)
        tracers[thread_id] = tracer
    else:
        tracer = tracers[thread_id]
This is the code. It was written by someone else, and now I have to extend it to produce the trace_link.
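For the trace_link, an OpenTelemetry trace id is a 128-bit integer (available at runtime from span.get_span_context().trace_id) that is conventionally rendered as 32 lowercase hex characters. A minimal formatting sketch; note the /traces/<id> path and the default Phoenix port are assumptions to verify against your Phoenix UI:

def trace_link(trace_id: int, base_url: str = "http://localhost:6006") -> str:
    """Format an OTel trace id (128-bit int) as a hypothetical Phoenix-style link.
    The /traces/<id> path is an assumption -- check how your Phoenix UI addresses traces."""
    return f"{base_url}/traces/{format(trace_id, '032x')}"

print(trace_link(0xABC123))  # http://localhost:6006/traces/00000000000000000000000000abc123

Inside the query view you could call this with the current span's trace id while a span is active, then return the link in the API response.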
