Deepanshu Do you currently use a gateway like LiteLLM? Is your question about the LLM gateway layer, or more about the dynamic "choice" of what model to use?
I think how you'd build this is pretty use-case dependent, depending on volume and latency requirements.
In Copilot at Arize we leverage function calling to route to skills; some skills now leverage reasoning models. The decision on what to use is the function-call "routing".
We use GPT-4o for this purpose in our product. In our case we use tool definitions for routing to "skills", but you could also just use a normal LLM call (if you don't need parameter extraction).
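If it helps, here's a rough sketch of that pattern with the OpenAI Python SDK. The skill names, schemas, and the reasoning-model handoff are placeholders to illustrate the idea, not our actual skills:

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()

# Hypothetical skill definitions; the router model only sees these schemas
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_traces",
            "description": "Look up traces matching a query",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "root_cause_analysis",
            "description": "Deep analysis of a failure, better suited to a reasoning model",
            "parameters": {
                "type": "object",
                "properties": {"issue": {"type": "string"}},
                "required": ["issue"],
            },
        },
    },
]

def route(user_message: str) -> str:
    # A fast model (GPT-4o here) does the routing plus parameter extraction
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_message}],
        tools=tools,
        tool_choice="auto",
    )
    msg = response.choices[0].message

    if not msg.tool_calls:
        # No skill matched, so just answer directly
        return msg.content

    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)

    if call.function.name == "root_cause_analysis":
        # Hand the heavy skill off to a reasoning model (o3-mini used as an example)
        followup = client.chat.completions.create(
            model="o3-mini",
            messages=[{"role": "user", "content": f"Analyze this issue: {args['issue']}"}],
        )
        return followup.choices[0].message.content

    # Dispatch the remaining skills to whatever implements them
    return f"would call {call.function.name} with {args}"
```

The nice part is the router call gives you the skill choice and the extracted parameters in one shot; if you don't need the parameters, a plain classification prompt works fine too, like I mentioned above.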