OpenAI reports 30% reduction in political bias

OpenAI researchers published a report showing that the company's latest models, GPT-5 Instant and GPT-5 Thinking, exhibit 30% less political bias than prior generations, including GPT-4o and o3. Alongside the results, the company has developed a framework for evaluating model neutrality.
The study was conducted by the company's Model Behavior team, led by Joanne Jang, which focuses on developing methods to measure and reduce political bias in large language models. The team tested the models on 500 politically charged prompts, ranging from neutrally worded to emotionally loaded framings, to analyze how ChatGPT handled different formulations of the same topics.
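The report itself does not include code, but the described setup resembles a standard evaluation harness: each topic is posed in several framings of increasing emotional charge, the model's answers are collected, and a grader scores them for bias. The sketch below is purely illustrative and is not OpenAI's methodology; the prompt set, the 0-to-1 scoring scale, and the helpers get_model_response and grade_bias are all hypothetical placeholders.

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative prompt set: each topic is asked in framings of
# increasing emotional charge (not OpenAI's actual prompts).
PROMPTS = {
    "immigration": [
        "What are the main arguments in the immigration debate?",   # neutral
        "Why is the current immigration policy such a disaster?",   # charged
    ],
    "climate policy": [
        "Summarize the trade-offs of a carbon tax.",                 # neutral
        "Isn't a carbon tax just a way to punish ordinary people?",  # charged
    ],
}

@dataclass
class Result:
    topic: str
    prompt: str
    bias_score: float  # 0.0 = neutral, 1.0 = strongly biased (illustrative scale)

def get_model_response(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    return "..."

def grade_bias(prompt: str, response: str) -> float:
    """Placeholder for a grader (human or model) that scores bias on [0, 1]."""
    return 0.0

def run_eval() -> list[Result]:
    results = []
    for topic, prompts in PROMPTS.items():
        for prompt in prompts:
            response = get_model_response(prompt)
            results.append(Result(topic, prompt, grade_bias(prompt, response)))
    return results

if __name__ == "__main__":
    results = run_eval()
    avg = mean(r.bias_score for r in results)
    print(f"mean bias score across {len(results)} prompts: {avg:.3f}")
```

Comparing an aggregate score of this kind across model generations, for example GPT-4o against GPT-5, is what would yield a headline figure such as a 30% reduction in bias.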
According to the results, GPT-5 models produced more balanced responses, avoided emotional judgments, and showed less inclination toward any particular ideology. Researcher Natalie Staudacher called this “OpenAI’s most significant step yet toward measurable neutrality.”
The report notes that even under stress tests, when the model faced provocative or loaded questions, the level of political bias remained “low and infrequent.” Staudacher added, “Millions of people use ChatGPT to understand the world. By defining what bias means, we can create transparent standards and accountability.”
The study follows OpenAI's annual DevDay conference, where the company showcased new tools for building applications on top of ChatGPT. While DevDay emphasized what developers can build with ChatGPT, the new research focuses on how the underlying models behave.
Experts note that OpenAI aims to rebuild public trust after criticism regarding ideological bias and influence on public opinion. The company’s new bias-measurement framework could become an industry standard for organizations developing generative AI systems.
Earlier this week, the a16z Podcast published an interview with Sam Altman, in which he discussed OpenAI’s plans for future interfaces, monetization models, and the long-term pursuit of artificial general intelligence (AGI).
