Anthropic rejects Pentagon push on Claude safeguards


Anthropic CEO Dario Amodei said the company will not remove Claude restrictions on mass domestic surveillance and fully autonomous weapons, deepening a dispute with the U.S. Department of Defense that has put a $200 million contract under review and raised the possibility of broader government action.

Amodei said the Pentagon’s January position was that AI contractors must permit their systems to be used for “any lawful use,” but he wrote that Anthropic “cannot in good conscience accede” to a request that would require the company to drop the two specific guardrails it has maintained in its Defense-related contracts.

In his Feb. 26, 2026, statement, Amodei said Anthropic has deployed Claude into classified U.S. government networks and that the model is “extensively deployed” across the Department of War and other national security agencies for mission-critical work, including intelligence analysis, modeling and simulation, operational planning, and cyber operations.

The company framed its refusal as narrow and tied to what it calls democratic values and current technical limits. Amodei wrote that using frontier AI for mass domestic surveillance is incompatible with democratic norms and creates new civil-liberties risks, and he argued that today’s systems are not reliable enough to power fully autonomous weapons that select and engage targets without human involvement.

Anthropic said the Defense Department has warned that it will contract only with AI firms that accept “any lawful use” language and remove safeguards in the two disputed areas, and that officials have threatened to remove Anthropic from Defense systems if it maintains the restrictions. Amodei also said the Pentagon raised the possibility of designating Anthropic a “supply chain risk” and invoking the Defense Production Act, which he called contradictory given the government’s parallel claims about Claude’s importance to national security.

The dispute is unfolding as the Pentagon publicly pushes for faster adoption of the most advanced commercial AI models. A Jan. 9 memorandum laying out the Department of War’s AI strategy directs the department to become an “AI-first” warfighting force and calls for department-wide experimentation with leading AI models, alongside “pace-setting projects” intended to accelerate deployment across warfighting, intelligence, and enterprise missions.

Amodei’s stance also lands amid growing attention on how large language models behave in simulated crisis environments. A King’s College London analysis by Kenneth Payne described a series of nuclear crisis war games in which frontier models repeatedly escalated to nuclear use, an outcome highlighted as evidence that advanced systems can behave aggressively under strategic pressure and uncertainty.

The public-interest group Public Citizen criticized the Pentagon’s posture in comments included in the reporting, arguing the government is using public pressure to force AI firms to weaken guardrails and warning that a “supply chain risk” label could effectively shut a company out of markets well beyond government procurement.

Anthropic said it prefers to continue supporting U.S. national security work while keeping the two safeguards in place, but added that if the Department chooses to offboard the company, it would work to enable a smooth transition to another provider to avoid disruption to ongoing missions.

