OpenAI releases powerful new open models

Illustration: Sarah Grillo/Axios
OpenAI on Tuesday debuted two freely downloadable models that it says can, for certain tasks, match the performance of some models behind ChatGPT.
Why it matters: OpenAI is aiming the new models at customers who want the cost savings and privacy benefits that come from running AI models directly on their own devices rather than relying on cloud-based services like ChatGPT or its rivals.
- It's also pitching the open models for countries that want to avoid getting their AI tools from the cloud servers of Google, Microsoft or other tech giants.
The big picture: The arrival of China's DeepSeek earlier this year jolted the open-model world and suggested that China might be taking the lead in that category, while Meta's commitment to its open source Llama project has come into question as the company pivots to the "superintelligence" race.
What they're saying: "We're excited to make this model, the result of billions of dollars of research, available to the world to get AI into the hands of the most people possible," CEO Sam Altman said in a statement.
- "Going back to when we started in 2015, OpenAI's mission is to ensure AGI that benefits all of humanity," Altman said. "To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit."
Driving the news: OpenAI is releasing two new open models, both capable of chain-of-thought reasoning and accessing the web. They can also, if desired, work in conjunction with larger cloud-based AI models.
- The first, a 117-billion-parameter model called gpt-oss-120b, can run on a single GPU with 80 gigabytes of RAM.
- The second, a 21-billion-parameter model called gpt-oss-20b, is designed to run on laptops or other devices with 16 gigabytes of RAM.
- Both models are available via Hugging Face and other cloud providers. Microsoft is also making available a version of the smaller model that has been optimized to run on Windows devices.
- The company provided various benchmarks showing the open models performing at or near the level of its o3 and o4-mini models.
Yes, but: The new open models are text-only, unlike most of OpenAI's recent models, which are so-called multimodal models, capable of processing and outputting text, images, audio and video.
Between the lines: Technically, the models are "open weights" rather than "open source," meaning anyone can download and fine-tune them, but there's no public access to other key information, such as details about the training data.
- That's similar to DeepSeek and many of Meta's Llama models, but not as open as OLMo from the Allen Institute for AI.
- OpenAI declined to comment to Axios on what the new models were trained on or how the training may differ from that of its closed models.
- The company also declined to commit to a specific schedule for future open models. OpenAI hasn't released an open large language model since GPT-2 in 2019.
