🤗 OpenAI just got easier to use
Haebom
Following the releases of gpt-3.5-turbo and gpt-4 earlier this year, OpenAI has shipped updates, informed by heavy developer use of the API, that make development more efficient and cost-effective.
The following new updates have been made:
New function calling capability in the Chat Completion API
Updated and more steerable versions of gpt-4 and gpt-3.5-turbo
New 16k context version of gpt-3.5-turbo (compared to the default 4k version)
75% cost reduction on the state-of-the-art embeddings model
25% cost reduction on input tokens for gpt-3.5-turbo
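To make the 16k item concrete, here is a minimal sketch of a Chat Completion request targeting the new larger-context model. The model name comes from the announcement; the message contents are illustrative, and the actual network call (which needs an API key) is shown only as a comment.

```python
# Minimal request payload for the new 16k-context model.
# The longer context window is useful for prompts that would not fit
# in the default 4k version, e.g. summarizing a long document.
payload = {
    "model": "gpt-3.5-turbo-16k",  # 4x the context of the default 4k version
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the following long document: ..."},
    ],
}

# With the OpenAI Python SDK of that era, this payload would be sent as:
# response = openai.ChatCompletion.create(**payload)
print(payload["model"])
```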
The most important update is the function calling feature.
It lowers the barrier to API-based development even further... (it was already low.)
With the previously released GPT models, making use of real-time or external data meant classifying the user's prompt by intent yourself, or writing conditional logic along the lines of "if the prompt contains X, respond with Y."
And this is not a single API round trip: the intermediate result is fed back into GPT, a new result comes back, and only after several such passes is an answer shown to the user. With the addition of functions, this flow becomes much simpler.
For example, take the question OpenAI used in its announcement: "What's the weather in Boston right now?"
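A sketch of how that example works with the new `functions` parameter. The schema follows OpenAI's published weather example; the weather function itself is a stub, and the model's reply is mocked here so the flow can be shown without an API key (the real calls are indicated in comments).

```python
import json

# Local function the model can ask us to call. In OpenAI's example this
# would query a real weather service; here it is a stub for illustration.
def get_current_weather(location: str, unit: str = "fahrenheit") -> str:
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

# JSON Schema description of the function, passed to the Chat Completion
# API via the new `functions` parameter.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. Boston, MA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Step 1: send the user prompt plus `functions` to the model. The real
# call (omitted here) would look like:
#   openai.ChatCompletion.create(model="gpt-3.5-turbo-0613",
#                                messages=messages, functions=functions)
# Step 2: instead of text, the model may reply with a `function_call`
# naming the function and its JSON-encoded arguments (mocked below):
mock_function_call = {
    "name": "get_current_weather",
    "arguments": '{"location": "Boston, MA"}',
}

# Step 3: execute the named function with the model-provided arguments,
# then send the result back as a "function" role message so the model
# can compose the final natural-language answer for the user.
args = json.loads(mock_function_call["arguments"])
result = get_current_weather(**args)
followup_message = {
    "role": "function",
    "name": mock_function_call["name"],
    "content": result,
}
print(followup_message["content"])
```

The key simplification: instead of hand-written intent classification, the model itself decides when a function is needed and emits structured arguments, so the developer only executes the call and passes the result back.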
This update incorporates some of what was discussed in the roundtable shared in the post below.