Altman accuses Anthropic of ‘fear-based’ Mythos pitch

On the Core Memory podcast, OpenAI CEO Sam Altman accused Anthropic of using “fear-based marketing” to promote its Claude Mythos model, saying the rhetoric appears aimed at keeping advanced AI control within a smaller group.

Altman acknowledged legitimate safety concerns around powerful AI systems but argued that invoking extreme danger can be used to justify exclusive access, letting organizations frame themselves as the only "trustworthy" stewards of the technology. He likened the approach to claiming to have built a bomb and then selling bomb shelters, turning alarm itself into a sales pitch.

Altman pushed back on reports that OpenAI was cutting back on computing investments and said the company plans to continue expanding capacity. He predicted critics would swing between saying OpenAI is pulling back and later accusing it of overspending.

Anthropic introduced Claude Mythos last month. Tests have shown the model can identify software vulnerabilities and simulate multi-stage cyber operations. Anthropic is providing access through a restricted program called Project Glasswing, giving a limited set of companies the ability to test the model. The company has said the controlled distribution lets defenders benefit first and has pledged resources to open-source security work.

An early version of Mythos helped find 271 vulnerabilities in Firefox during internal testing; the reported bugs were patched. Security researchers and government officials warned that the same capabilities that speed defensive work could be used offensively if misapplied. Independent evaluations found the model can autonomously complete complex cyber tasks, while other researchers said they reproduced some findings using publicly available models. Anthropic has said many current cybersecurity benchmarks no longer adequately measure Mythos’s performance.

Parts of the U.S. government have urged caution about the model. The National Security Agency is reported to be testing a preview version on classified networks.

Altman said there will be increasing rhetoric about models that are “too dangerous to release” as capabilities improve, but he warned against treating every claim as definitive. He expressed support for using strong models in cybersecurity under controlled release plans and said OpenAI has a framework it feels comfortable with for putting similar capabilities into the world.
