OpenAI Dev Day Recap: Exciting New APIs and Innovations
OpenAI Dev Day: Wow, these folks are shipping product 🚢 What impresses me is how much product they have put out while no one has caught up with them on GPT-4 yet. They have over a year of lead on the competition, and that lead feels like it is growing. This is my quick take, with probably more to add through the week as we dig in deeper. This was a really amazing set of solutions for the API-centric crowd.
Assistants API: This is a powerful new API that combines RAG and Code Interpreter to help create the foundation for agents. There are a couple of other nice things added on: a stateful API of messages in threads, and improved function calling (multiple functions). Beyond just making Code Interpreter available through the API, this is a huge bet on agents being the future of AGI. It's a bit more of a black-box approach than others in the ecosystem, in that you can't see or control everything that goes on inside OpenAI (retrieval, etc.). Excited to see what we can build on top of the API.
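A minimal sketch of what that stateful assistant/thread/run flow looks like as raw request payloads (no network calls or SDK here; the field names follow the Dev Day announcement, so treat exact shapes as assumptions, and the assistant name and id below are made up):

```python
# Payload-shape sketch of the Assistants API flow (assumptions, not verified).

# 1. Create an assistant with hosted tools (Code Interpreter + retrieval).
create_assistant = {
    "model": "gpt-4-1106-preview",
    "name": "data-helper",  # hypothetical assistant name
    "instructions": "You analyze files the user uploads.",
    "tools": [{"type": "code_interpreter"}, {"type": "retrieval"}],
}

# 2. State lives server-side: create a thread and append messages to it...
create_thread = {"messages": []}
user_message = {"role": "user", "content": "Summarize the attached CSV."}
create_thread["messages"].append(user_message)

# 3. ...then start a run, which executes the thread against the assistant.
create_run = {"assistant_id": "asst_placeholder"}  # placeholder id
```

The notable design choice is that the conversation state (the thread) is stored on OpenAI's side rather than resent by the client on every call.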
Hot take: It will be interesting to see how LangChain views this, as it does feel directly competitive.
GPT-4V (vision) is now available in the API, in addition to Code Interpreter above (Assistants).
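A sketch of what a vision request body looks like: the image is passed as one part of a content list alongside text (shape assumed from the announcement; the URL is a made-up example):

```python
# GPT-4V chat request body sketch (payload only, no network call).
vision_request = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},  # hypothetical image
            ],
        }
    ],
    "max_tokens": 300,
}
```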
Big bet on voice, with TTS now available in the API and Whisper V3 coming.
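For reference, a TTS request is a small payload like the following (shape assumed from the announcement; "tts-1" and the "alloy" voice are the launch names as I understand them):

```python
# TTS request body sketch; the API returns audio bytes to write to disk.
tts_request = {
    "model": "tts-1",
    "voice": "alloy",
    "input": "Hello from Dev Day!",
    "response_format": "mp3",
}
```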
Function calling: We love functions! It's the most underrated feature of OpenAI. Calls got faster and cleaner (clean JSON), and the model can now call multiple functions in one go.
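"Multiple in one go" means a single model response can carry several tool calls. A sketch with two made-up functions and a mocked response (no API call; the function names and arguments are illustrative):

```python
import json

# Two tool definitions offered to the model.
tools = [
    {"type": "function", "function": {
        "name": "get_weather",  # hypothetical function
        "parameters": {"type": "object",
                       "properties": {"city": {"type": "string"}},
                       "required": ["city"]}}},
    {"type": "function", "function": {
        "name": "get_time",  # hypothetical function
        "parameters": {"type": "object",
                       "properties": {"city": {"type": "string"}},
                       "required": ["city"]}}},
]

# Mocked model response: both functions requested in one turn.
mock_tool_calls = [
    {"id": "call_1", "type": "function",
     "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
    {"id": "call_2", "type": "function",
     "function": {"name": "get_time", "arguments": '{"city": "Paris"}'}},
]

# The "clean JSON" part: arguments should parse without repair heuristics.
for call in mock_tool_calls:
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args["city"])
```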
New JSON Mode: Guarantees model outputs are valid JSON (looks like it may be independent of functions?)
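The knob is a `response_format` field on the request; a sketch with a mocked reply standing in for the model output (payload shape assumed from the announcement):

```python
import json

# JSON mode request sketch: set response_format and ask for JSON in the prompt.
json_mode_request = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system",
         "content": "Reply in JSON with keys 'city' and 'population'."},
        {"role": "user", "content": "Tell me about Tokyo."},
    ],
}

# With the guarantee, the raw message content should always parse.
mock_content = '{"city": "Tokyo", "population": 13960000}'  # mocked reply
parsed = json.loads(mock_content)
```

Note the guarantee is about syntactic validity, not about matching a particular schema; schema checks stay on your side.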
Reproducible Outputs: Get consistent return values for the same inputs to help with debugging!
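Mechanically this is a `seed` parameter on the request: resend an identical request with the same seed and you should get (best-effort) the same completion. A sketch, with the parameter name taken from the announcement and the values made up:

```python
import copy

# Reproducible-outputs sketch (payload only, no network call).
base_request = {
    "model": "gpt-4-1106-preview",
    "seed": 42,          # fixed seed -> best-effort deterministic sampling
    "temperature": 0,
    "messages": [{"role": "user", "content": "Pick a random fruit."}],
}

# For a debugging rerun, resend a byte-identical request.
rerun_request = copy.deepcopy(base_request)
# Responses also carry a system_fingerprint; if it changes between runs,
# the backend changed and identical outputs are no longer expected.
```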
Context Length 128K: I know everyone is excited about 128K, but we need to do more testing ourselves. We've seen a lot of performance fall off for data in the middle of the context, so I'm going to say the jury is still out.
GPT Builder - Feels a bit like the simple UI version of the Assistants API, maybe targeting a different, less developer-heavy user base.
