Is there any dedicated resource for evaluating calls that use structured outputs? Almost all of my LLM calls use a Pydantic schema:
```python
structured_llm = llm.with_structured_output(MyClass)
result = structured_llm.invoke(my_prompt)
```
In the prompt playground it would be insanely useful to be able to pass both the prompt and the class, since with this approach all of the MyClass docstring and field descriptions are taken into account, so my_prompt can stay minimal.
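For context, here's a minimal sketch of what I mean (class and field names are just illustrative) -- the class docstring and `Field` descriptions all land in the JSON schema the model receives, which is why the prompt itself carries almost nothing:

```python
from pydantic import BaseModel, Field


class MyClass(BaseModel):
    """Sentiment extracted from a customer review."""  # docstring -> schema description

    sentiment: str = Field(description="One of 'positive', 'negative' or 'neutral'.")
    confidence: float = Field(description="Model confidence between 0 and 1.")


# This is (roughly) the schema that with_structured_output() hands to the model.
# The docstring and per-field descriptions are all in here, so my_prompt can
# be as short as "Classify this review: ...".
schema = MyClass.model_json_schema()
print(schema["description"])
print(schema["properties"]["sentiment"]["description"])
```

Evaluating the prompt alone in the playground misses all of this schema-side instruction, which is the gap I'm asking about.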
I see there is an option for function calls, but not for this ^^