OpenAI Releases DeepSeek Challenger Model

Available to paid and free users, o3-mini targets STEM applications with enhanced reasoning capabilities

Berenice Baker, Editor

February 3, 2025

OpenAI CEO Sam Altman (Image: Getty Images)

OpenAI has released its latest AI model, o3-mini, designed to deliver advanced reasoning capabilities with enhanced speed and cost efficiency.

The move comes amid increasing competition in the AI industry, particularly following the launch of DeepSeek's R1 model, which claims to offer performance comparable to OpenAI's o1.

The o3-mini model is available through ChatGPT and the OpenAI API, with unlimited use for paid subscribers and usage limits for free-plan users. This marks the first time OpenAI has made a reasoning model available to free users in ChatGPT.

The new model particularly targets science, technology, engineering and mathematics (STEM) applications, offering improved performance in areas such as math, coding and scientific problem-solving.

It also supports features requested by developers, including function calling, structured outputs and developer messages, making it ready for production use.

Developers can choose among three reasoning effort options (low, medium and high) to optimize for specific use cases, allowing the model to "think harder" on complex challenges or prioritize speed when necessary.
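
For illustration, the snippet below is a minimal sketch (not taken from OpenAI's announcement) of how a developer might set the reasoning effort when calling o3-mini through the openai Python SDK; the prompt and the choice of "high" effort are assumptions made for this example.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Ask o3-mini to "think harder" by requesting high reasoning effort;
    # accepted values are "low", "medium" and "high".
    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort="high",
        messages=[
            {"role": "user", "content": "Prove that the square root of 2 is irrational."}
        ],
    )

    print(response.choices[0].message.content)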

In benchmark evaluations, o3-mini with medium reasoning effort matched the performance of the previous o1 model in math, coding and science tasks, while delivering faster responses. It also produced more accurate and clearer answers, with stronger reasoning abilities than o1-mini.

“The release of OpenAI o3-mini marks another step in OpenAI’s mission to push the boundaries of cost-effective intelligence,” OpenAI wrote in a blog post accompanying the release.

“By optimizing reasoning for STEM domains while keeping costs low, we’re making high-quality AI even more accessible. This model continues our track record of driving down the cost of intelligence — reducing per-token pricing by 95% since launching GPT-4 — while maintaining top-tier reasoning capabilities. As AI adoption expands, we remain committed to leading at the frontier, building models that balance intelligence, efficiency and safety at scale.”

About the Author

Berenice Baker

Editor, Enter Quantum

Berenice is the editor of Enter Quantum and co-editor of AI Business. Berenice has a background in IT and 20 years of experience as a technology journalist.
