Issue with Arize Python SDK prompt provider validation blocking Anthropic prompts from Prompt Hub
Hi Arize team,

We’re using the Arize Python SDK (arize==8.8.1) to fetch prompts from Prompt Hub and ran into an issue we’d like guidance on. When a prompt is created in Prompt Hub with a provider such as Anthropic, fetching it through the SDK can fail with a Pydantic validation error, because the prompt provider appears to be validated against a narrower set of values: openAI, azureOpenAI, awsBedrock, vertexAI, custom.

In our case, we only need the prompt template/messages from Prompt Hub. We do not use the provider metadata from the fetched prompt to decide which LLM provider to call at runtime, so this validation failure blocks prompt retrieval over metadata we don’t actually depend on.

A few questions:
1. Is this a known issue or expected behavior in the current SDK/API?
2. Do you plan to support Anthropic as a valid LLM provider for Prompt Hub soon?
3. Would you recommend fetching prompts via the REST API instead of the Python SDK if the goal is only to retrieve prompt text/messages?
4. Are there plans to expand the allowed prompt provider values, or to relax validation when reading prompts?
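For reference, here is a minimal stand-in that reproduces the failure pattern we are seeing. This is not the actual SDK model — the enum and `parse_provider` are hypothetical stand-ins for the strict validation step — but it shows why a prompt whose provider is "anthropic" is rejected even when only the messages are needed:

```python
from enum import Enum

# Hypothetical stand-in for the SDK's provider model: the validator
# appears to accept only these values, so "anthropic" is rejected.
class PromptProvider(str, Enum):
    OPENAI = "openAI"
    AZURE_OPENAI = "azureOpenAI"
    AWS_BEDROCK = "awsBedrock"
    VERTEX_AI = "vertexAI"
    CUSTOM = "custom"

def parse_provider(value: str) -> PromptProvider:
    # Mirrors the strict validation step (this function name is
    # hypothetical, not part of the SDK).
    return PromptProvider(value)

parse_provider("openAI")  # accepted
try:
    parse_provider("anthropic")  # rejected, blocking the whole fetch
except ValueError as err:
    print("validation failed:", err)
```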
We’d appreciate your recommendation on the cleanest and most future-proof approach.
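In the meantime, our working assumption is that we could sidestep the model validation by treating the response as plain JSON and reading only the message fields. A sketch of that approach (the payload shape and field names here are assumptions on our part, not the documented schema):

```python
import json

# Hypothetical Prompt Hub response body; field names are assumptions.
raw_response = """
{
  "name": "support-triage",
  "provider": "anthropic",
  "messages": [
    {"role": "system", "content": "You are a support assistant."},
    {"role": "user", "content": "{question}"}
  ]
}
"""

def extract_messages(body: str) -> list:
    # Parse as plain JSON and return only the message list, skipping
    # any provider validation entirely.
    data = json.loads(body)
    return data.get("messages", [])

messages = extract_messages(raw_response)
```

If this is the direction you would recommend, confirmation of the stable response fields to rely on would be helpful.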
