Judge backs Pentagon in Anthropic supply-chain ruling

A federal judge denied Anthropic’s request to block the Pentagon’s designation of the AI company as a supply-chain risk, leaving restrictions on some government work in place.

In the company’s first direct legal challenge to a Defense Department security determination, the judge rejected Anthropic’s bid for an emergency order, leaving limits on the company’s participation in certain government programs in effect while the case proceeds.

Anthropic sued after the Defense Department identified the company as a potential risk to its supply chain. The company argued the designation was improper and procedurally flawed and asked the court to enjoin enforcement while the case proceeds. The judge reviewed the Defense Department’s administrative record and concluded the department acted within its statutory authority to protect national security interests.

The court found the administrative record provided a sufficient factual basis for the designation and that Anthropic had not shown a likelihood of success on the merits that would justify emergency relief. As a result, the Pentagon’s restrictions remain effective pending further litigation or appeal.

The designation can have practical consequences for Anthropic’s work with federal agencies. Similar findings are often used to exclude companies from certain procurement streams, restrict access to sensitive networks, or require heightened oversight before awarding contracts. For Anthropic, the designation may hinder efforts to expand sales to defense and intelligence customers and to work with contractors that support classified or critical government systems.

In the filing, Anthropic asserted the Pentagon failed to follow required procedures and relied on unspecified concerns. The Defense Department submitted documents describing vulnerabilities in the company’s AI safeguards and the reasons for limiting its role in defense supply chains. The judge considered those materials, determined they supported the department’s decision, and denied the requested preliminary injunction.

The case comes as U.S. agencies increase scrutiny of advanced AI providers and of supply-chain vulnerabilities in the technology sector. The Pentagon has been developing policies to manage risks from software, hardware and services that could affect military readiness or expose sensitive information. Agencies have used statutory authorities to screen suppliers and to block access to defense systems when security questions remain unresolved.

Anthropic, founded by former OpenAI researchers, builds large language models for commercial and enterprise customers and has sought government business while emphasizing safety and alignment research. The company can appeal the ruling or continue to litigate the underlying merits of the designation.

Legal experts characterize the dispute as a test of how courts balance agency deference on technical national-security judgments against a company’s ability to challenge administrative determinations that affect its business. The judge noted courts typically defer to agencies on such technical judgments unless there is a clear legal error or a lack of supporting evidence.

The material on GNcrypto is intended solely for informational use and must not be regarded as financial advice. We make every effort to keep the content accurate and current, but we cannot warrant its precision, completeness, or reliability. GNcrypto does not take responsibility for any mistakes, omissions, or financial losses resulting from reliance on this information. Any actions you take based on this content are done at your own risk. Always conduct independent research and seek guidance from a qualified specialist. For further details, please review our Terms, Privacy Policy and Disclaimers.
