thanks for getting back to me. do you have a rough ETA for when you expect to complete the full sdk and have it leave alpha / beta stage?
async client: i just found that it's not thread safe either, so that's quite annoying, but we can work around that with a lock for now
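to illustrate the lock workaround: a minimal sketch that serializes access to a client that isn't safe for concurrent use. `inner` and its `call` method are placeholders for the real SDK client, not the actual class name:

```python
import asyncio

class LockedClient:
    """Wrap a non-thread-safe async client so only one coroutine
    uses it at a time. `inner` stands in for the SDK's async client."""

    def __init__(self, inner):
        self._inner = inner
        self._lock = asyncio.Lock()

    async def call(self, *args, **kwargs):
        # Serialize all access to the underlying client.
        async with self._lock:
            return await self._inner.call(*args, **kwargs)
```

this is only safe within a single event loop; sharing the client across threads would need a threading.Lock instead.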
we're using the v7 python package for prompt management, because v8 no longer has the format and to_openai_kwargs helpers.
the to_openai_kwargs helper doesn't pass the llm_parameters configured in arize, only the model name. so temperature etc. is not passed through, which is surprising given the naming
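our current workaround, as a sketch: merge the prompt's parameters into the kwargs ourselves. `prompt.to_openai_kwargs()` exists in v7; the `llm_parameters` attribute name here is an assumption about the prompt object's shape and may differ in the real SDK:

```python
def openai_kwargs_with_params(prompt):
    """Build OpenAI call kwargs from a prompt object, re-adding the
    LLM parameters that to_openai_kwargs drops.

    Assumes `prompt.to_openai_kwargs()` returns a dict (model, messages)
    and `prompt.llm_parameters` holds the configured params (hypothetical
    attribute name).
    """
    kwargs = prompt.to_openai_kwargs()
    # Merge temperature, top_p, etc. back in.
    kwargs.update(prompt.llm_parameters or {})
    return kwargs
```

the merged dict can then be passed straight to `client.chat.completions.create(**kwargs)`.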
there's no support for setting custom kv pairs in a model preset in the AX ui and then passing them through to the openai SDK invocation. eg I can't set this in the prompt hub:
{
"text": {
"verbosity": "low"
}
}
basically the use case is: configure a model in arize ax, test drive it in the playground on a prompt and in evals, commit the prompt with the model to the prompt hub, and then expect our application to use the exact same model + parameters as were used in the playground and in evals
Hi. I'm evaluating Arize as an alternative to Langsmith, which we've been using so far. From what i can tell, Arize is in the middle of migrating from Phoenix Cloud to AX as the primary product offering: APIs are changing from v1 to v2 and auth flows redirect to AX, while Phoenix Cloud is still silently supported. But it also looks like AX doesn't have full feature parity with Phoenix Cloud yet. Frankly, the experience right now is confusing. Is there a blog post or documentation explaining the roadmap for when AX will be on par? Gaps that I have noticed so far while poking at it:
AX doesn't have a python async client (we have to wrap the sync client with asyncio.to_thread)
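the asyncio.to_thread stopgap looks like this: run the blocking sync SDK call on a worker thread so it doesn't stall the event loop. `sync_client.get_prompt` is a placeholder for whatever sync method is being wrapped:

```python
import asyncio

async def get_prompt_async(sync_client, prompt_id):
    """Run a blocking SDK call off the event loop.

    `sync_client.get_prompt` is a stand-in for any sync client method;
    asyncio.to_thread (Python 3.9+) executes it on a worker thread.
    """
    return await asyncio.to_thread(sync_client.get_prompt, prompt_id)
```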
With the AX python sdk i can't fetch a prompt from prompt hub by name, only by id. To get it by name, i have to use the list endpoint, map name to id and then get the prompt object by id
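the name-to-id workaround described above, sketched out. The method names (`list_prompts`, `get_prompt`) and the `.name`/`.id` fields are assumptions about the AX SDK surface, not its documented API:

```python
def get_prompt_by_name(client, name):
    """Fetch a prompt by name via the list endpoint.

    Lists all prompts, builds a name -> id map, then fetches by id.
    `list_prompts`/`get_prompt` are hypothetical method names.
    """
    ids_by_name = {p.name: p.id for p in client.list_prompts()}
    if name not in ids_by_name:
        raise KeyError(f"no prompt named {name!r}")
    return client.get_prompt(ids_by_name[name])
```

note this is O(n) in the number of prompts per lookup, so it's worth caching the name-to-id map if prompts are fetched often.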
AX does not have a model cost config for gpt-5.4 (phoenix cloud does)
