January 26, 2024
OpenAI has announced several updates, including a series of new embedding models and a price reduction for GPT-3.5 Turbo, the third in a year.
GPT-3.5 Turbo input prices have now been reduced by 50% to $0.0005 per 1K tokens, and output prices have been reduced by 25% to $0.0015 per 1K tokens.
That puts GPT-3.5 Turbo's per-token cost well below Anthropic's Claude 2.0 and 2.1, which cost $0.008 per 1K input tokens and $0.024 per 1K output tokens.
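Using the per-1K-token prices quoted above, the gap is easy to see with a back-of-the-envelope calculation. The request sizes below are hypothetical, chosen only for illustration:

```python
def request_cost(input_tokens, output_tokens, in_price_per_1k, out_price_per_1k):
    """Cost in USD of a single request, given per-1K-token prices."""
    return (input_tokens / 1000) * in_price_per_1k + (output_tokens / 1000) * out_price_per_1k

# A hypothetical request: 10,000 input tokens, 2,000 output tokens.
gpt35 = request_cost(10_000, 2_000, 0.0005, 0.0015)  # GPT-3.5 Turbo, new pricing
claude = request_cost(10_000, 2_000, 0.008, 0.024)   # Claude 2.0/2.1

print(f"GPT-3.5 Turbo: ${gpt35:.4f}")  # $0.0080
print(f"Claude 2.x:    ${claude:.4f}")  # $0.1280
```

At these rates, the same request costs roughly 16 times more on Claude 2.x than on the newly discounted GPT-3.5 Turbo.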
OpenAI also announced an updated version of GPT-3.5 Turbo, gpt-3.5-turbo-0125, with improvements including higher accuracy when responding in requested formats and a fix for a bug that caused a text-encoding issue in non-English function calls.
The company is also releasing an updated GPT-4 Turbo model, gpt-4-0125-preview. The preview model can complete tasks more thoroughly than prior models and boasts a large context window of 128,000 tokens, around 85,333 words per input.
The new GPT-4 Turbo preview model also has lower prices.
GPT-4 Turbo with vision capabilities will come to general availability "in the coming months," OpenAI also announced.
OpenAI also unveiled new embedding models. An embedding model is a type of AI model that transforms high-dimensional data, like text or images, into a lower-dimensional space, allowing for easier computation of similarities between data points.
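Once text has been mapped into that lower-dimensional space, similarity between two pieces of text reduces to a simple vector computation, most commonly cosine similarity. A minimal sketch, using tiny hand-made vectors as stand-ins for real model outputs (a production embedding from a model like text-embedding-3-small has far more dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" for illustration only.
cat = [0.9, 0.1, 0.0, 0.1]
kitten = [0.85, 0.15, 0.05, 0.1]
car = [0.1, 0.05, 0.9, 0.2]

# Related concepts should score higher than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

This is the operation that powers common embedding use cases such as semantic search and retrieval: embed the query, embed the documents, and rank by similarity.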
OpenAI revealed text-embedding-3-small, a new embedding model meant as an upgrade to its predecessor, text-embedding-ada-002, which was released in December 2022.
The new embedding model offers improved performance on multilingual retrieval and English-language tasks.
It is also five times cheaper to use than text-embedding-ada-002, priced at just $0.00002 per 1K tokens.
There is also a new large-scale embedding model, text-embedding-3-large. Boasting stronger performance, the larger model is priced at $0.00013 per 1K tokens.
Both of OpenAI's new embedding models were trained using a new technique that allows developers to trade off embedding performance against cost. Developers can now shorten embeddings without the embedding losing its concept-representing properties.
OpenAI also unveiled text-moderation-007, a new moderation model for identifying potentially harmful text.
The ChatGPT maker is trying to update its safety efforts in the wake of its boardroom shakeup. The new moderation model enables developers to check their content against OpenAI's usage policies to detect potentially problematic outputs.
Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.