Hi guys, I am seeing only this in the explanation column: --
The explanation column showing only "--" typically means that no explanation was generated or logged for those evaluations. This can happen if the provide_explanation parameter was not set to True when running your evals, or if the model or function calling did not return explanations as expected. According to the documentation, you must set provide_explanation=True in functions like run_evals or llm_classify to get explanations in the output dataframe; otherwise, the explanation column will be empty or show a placeholder like "--" (docs, evals-with-explanations). If you did set provide_explanation=True and still see "--", it could be due to a bug or a model compatibility issue, especially if your model does not support function calling or the explanation prompt is not being parsed correctly (github issue). Would you like more detail on troubleshooting this or on how to ensure explanations are generated?

Sources:
- Github Issue: Unable to Parse Results Using llama3.1-70b-instruct Model in Evaluation
- evals-with-explanations
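As an aside (not part of the original thread), a quick way to see how many rows actually received an explanation is to count placeholder and missing values. This sketch assumes the output dataframe has an `explanation` column and that "--" is the placeholder, as described above; the example dataframe is hypothetical.

```python
import pandas as pd

# Hypothetical eval output; in practice this dataframe comes from llm_classify
eval_df = pd.DataFrame({
    "label": ["correct", "incorrect", "correct"],
    "explanation": ["Scripts are functionally equivalent.", "--", None],
})

# Treat "--" and missing values as "no explanation generated"
missing = eval_df["explanation"].isna() | (eval_df["explanation"] == "--")
print(f"{missing.sum()} of {len(eval_df)} rows have no explanation")
```

If every row is flagged here even though provide_explanation=True was set, the problem is on the generation/parsing side rather than in the dataframe.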
💡 Hint: Mention RunLLM in the thread for followups.
Using llm_classify. The prompt ends like this:
[BEGIN DATA]
Ruby Script:
{{{ruby_script}}}
Typescript Script:
{{{ts_script}}}
[END DATA]
Please analyze the data carefully and provide an explanation followed by your response.
EXPLANATION: Provide your reasoning step by step, evaluating why the scripts are functionally the same or not.
LABEL: "correct" or "incorrect"
"""Template:
template = ClassificationTemplate(
    rails=['correct', 'incorrect'],
    template=LOGIC_LLM_JUDGE_PROMPT,
    delimiters=("{{{", "}}}"),
)

Eval:
with suppress_tracing():
    logic_eval = llm_classify(
        dataframe=code_gen_df,
        template=template,
        rails=['correct', 'incorrect'],
        model=omodel,
        provide_explanation=True,
        include_prompt=True,
        verbose=True,
    )
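For context, here is a hand-rolled sketch (not Phoenix's actual implementation) of what the `delimiters=("{{{", "}}}")` setting means: variables wrapped in triple braces are substituted per row before the prompt is sent to the model. The `render` helper and the sample row below are illustrative only.

```python
# Minimal sketch of triple-brace template substitution; the real
# ClassificationTemplate in phoenix.evals handles this internally.
def render(template: str, row: dict, open_d: str = "{{{", close_d: str = "}}}") -> str:
    for name, value in row.items():
        template = template.replace(f"{open_d}{name}{close_d}", str(value))
    return template

prompt = "Ruby Script:\n{{{ruby_script}}}\nTypescript Script:\n{{{ts_script}}}"
row = {"ruby_script": "puts 1", "ts_script": "console.log(1)"}
print(render(prompt, row))
```

If a variable name in the template does not match a dataframe column exactly, the placeholder is left unsubstituted, which can confuse the judge model and lead to unparseable output.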
Yes, I have seen explanations before in the current installation.
I am using launch_app() to start my instance.
