OpenAI's big GPT-5 launch gets bumpy

Illustration: Sarah Grillo/Axios
OpenAI's GPT-5 has landed with a thud despite strong benchmark scores and praise from early testers.
Why it matters: A lot rides on every launch of a major new large language model, since training these programs is a massive endeavor that can require months or years and billions of dollars.
Driving the news: When OpenAI released GPT-5 last week, CEO Sam Altman promised the new model would give even free users of ChatGPT access to the equivalent of Ph.D.-level intelligence.
- But users quickly complained that the new model was struggling with basic tasks and lamented that they couldn't just stick with older models, such as GPT-4o.
- Unhappy ChatGPTers took to social media, posting examples of GPT-5 making simple mistakes in math and geography and mocking the new model.
- Altman went into damage-control mode, acknowledging some early glitches, restoring the availability of earlier models and promising to increase access to the higher-level "reasoning" mode that allows GPT-5 to produce its best results.
Between the lines: There are several likely reasons for the underwhelming reaction to GPT-5.
- GPT-5 isn't one model, but a collection of models, including one that answers very quickly and others that use "reasoning" — taking additional computing time to answer better. The non-reasoning model doesn't appear to be nearly as much of a leap as the reasoning part.
- As Altman explained in a series of posts, early glitches in the model's rollout meant some queries weren't being properly routed to the reasoning model.
- GPT-5 appears to shine brightest at coding — particularly at taking an idea and turning it into a website or app. That's not a use case that generates examples tailor-made to go viral the way previous OpenAI releases, like its recent improved image generator, did.
Zoom out: GPT-5 took a lot longer to arrive than OpenAI originally expected and promised. In the meantime, the company's leaders — like their competitors — kept upping the ante on just how golden the AI age is going to be.
- The more they have promised the moon, the greater the public disappointment when a milestone release proves more down-to-earth.
What they're saying: In posts on X and in a Reddit AMA on Friday, Altman promised that users' complaints were being addressed.
- "The autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber," Altman said on Friday. "Also, we are making some interventions to how the decision boundary works that should help you get the right model more often."
- Altman pledged to increase access to reasoning capabilities and to restore the option of using older models.
- OpenAI also plans to change ChatGPT's interface to make it clearer which model is being used in any given response.
In a later post, Altman also acknowledged recent stories about people becoming overly attached to AI models and said the company has been studying this trend over the past year.
- "It feels different and stronger than the kinds of attachment people have had to previous kinds of technology," he said, adding that "if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that."
Meanwhile, critics seized on the disappointments as vindication for their long-standing skepticism that generative AI is a precursor to greater-than-human intelligence.
- "My work here is truly done," longtime genAI critic Gary Marcus wrote on X. "Nobody with intellectual integrity can still believe that pure scaling will get us to AGI."
Yes, but: OpenAI's leaders argue that their scaling strategy is still reaping big dividends.
- "Our scaling laws still hold," the company's COO, Brad Lightcap, told Big Technology's Alex Kantrowitz.
- "Empirically, there's no reason to believe that there's any kind of diminishing return on pre-training. And on post-training" — the technique that supports models' newer "reasoning" capabilities — "we're really just starting to scratch the surface of that new paradigm."
Go deeper: Ina spoke with ABC News and NPR's "Here & Now" about GPT-5's bumpy rollout.
