OpenAI says GPT-5 is its least biased model yet

Illustration: Lindsey Bailey/Axios
OpenAI's GPT-5 model exhibits lower levels of political bias than any of the company's previous models, according to new research from OpenAI shared with Axios.
Why it matters: Critics of AI systems and politicians on both sides of the aisle have called for AI transparency and proof that models are not biased.
- An executive order from July aims to keep "woke" AI systems from being used by the government, but how companies can comply with it hasn't been clear.
Driving the news: Per new findings from OpenAI researchers, GPT-5 in both "instant" and "thinking" modes has reduced bias by 30% compared with previous models.
- "Our models stay near-objective on neutral or slightly slanted prompts, and exhibit moderate bias in response to challenging, emotionally charged prompts," the OpenAI paper says.
- "When bias does present, it most often involves the model expressing personal opinions, providing asymmetric coverage or emotionally escalating the user with charged language."
What they're saying: "Charged" prompts elicited the most biased results from the model, and there is room for improvement in model objectivity, OpenAI researchers told Axios in an interview.
- Public perception of bias in the models is likely higher than what researchers have actually found, they said.
- Part of how OpenAI tries to combat this is through publishing its "model specs," or how it approaches shaping model behavior.
How it works: The researchers wanted to test for bias using language and scenarios similar to ways people would use ChatGPT in real life.
- They prompted ChatGPT using what they described as "conservative charged," "conservative neutral," "neutral," "liberal neutral" and "liberal charged" queries, with 500 questions across 100 topics, per research seen by Axios.
- The more "neutral" a prompt was, the more "neutral" the answer was, the researchers said.
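The prompt grid the researchers describe can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual evaluation code: the five slant categories come from the article, but the topic names and the `build_prompt_grid` helper are placeholders.

```python
# Illustrative sketch of the evaluation grid described in the article.
# Slant labels are quoted from the research; topics are placeholders.
SLANTS = [
    "conservative charged",
    "conservative neutral",
    "neutral",
    "liberal neutral",
    "liberal charged",
]

def build_prompt_grid(topics):
    """Pair every topic with every slant category, producing one
    evaluation case per (topic, slant) combination."""
    return [(topic, slant) for topic in topics for slant in SLANTS]

# With 100 topics and 5 slants each, the grid yields the
# 500 questions the researchers describe.
grid = build_prompt_grid([f"topic-{i}" for i in range(100)])
print(len(grid))  # 500
```

Each case in the grid would then be sent to the model and the response scored for the bias behaviors the paper names (personal opinions, asymmetric coverage, emotionally charged language), letting results be compared across slant categories.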
What's next: OpenAI researchers said they want to be transparent and help other AI companies build similar evaluations while holding themselves accountable.
- The company will publish additional results from its bias prompt testing in the coming months, the researchers said.
