In a development that has stunned Silicon Valley, AI giant Anthropic has filed two federal lawsuits against the US Department of Defense (DoD), which the Trump administration recently rebranded as the Department of War.
The legal action, filed on March 9, 2026, seeks to overturn the unprecedented supply chain risk declaration issued by Defense Secretary Pete Hegseth. The designation effectively blacklists the San Francisco-based company from defense contracts, marking the first time a US tech company has received it.
What Happened
The conflict came to public attention in late February 2026 after a breakdown in negotiations over the military's use of Anthropic's best-selling AI model, Claude. The Pentagon sought unrestricted use of the technology for all lawful purposes, but Anthropic was unwilling to lift specific safety guardrails that prevent its use for mass domestic surveillance of American citizens or for developing fully autonomous lethal weapons systems.
Consequently, President Donald Trump issued an executive directive on February 27, commanding all federal agencies to stop using Anthropic's technology. On March 5, the company was officially designated as a supply chain risk under Section 3252, Title 10 of the U.S. Code. Secretary Hegseth characterized the company's refusal as a threat to national security, saying that vendors cannot hold veto power over military operational decisions.
Why the Supply Chain Risk Label Matters
The supply chain risk designation carries consequences well beyond a single lost contract.
Procurement Bans: The designation effectively prohibits DoD agencies and their primary contractors, such as Boeing and Lockheed Martin, from using Anthropic's services in any capacity on DoD work.
Erosion of Trust: Beyond the legal prohibition, the label signals to the broader market that the firm is a potential security risk, eroding commercial trust.
Operational Integration: This is particularly damaging because Claude was already integrated into systems like the Palantir Maven Smart System, used in active theaters such as recent operations in the Middle East.
Anthropic’s Position
Anthropic's legal team has described the designation as a category error and an unlawful act of retaliation. In briefs filed in the U.S. District Court for the Northern District of California and the D.C. Circuit Court of Appeals, the company characterizes the government's actions as:
Constitutional Overreach: The government's apparent intent is to retaliate against the company for exercising protected speech, namely its published safety principles and ethical guardrails.
Misuse of the Statute: The provision was originally designed to counter foreign sabotage and espionage, including malware and 'backdoors', not a domestic vendor's terms of service.
Reliability: Anthropic CEO Dario Amodei has stated that the company believes current frontier models are not reliable enough to manage lethal autonomous force without human oversight.
Broader Implications for the AI Industry
This case sets a precedent for regulating high-stakes dual-use technology. By applying a security risk designation to a domestic company over its terms of service, the government is forcing every AI lab into a choice between compliance and exile.
The Safety Gap: Competitors such as OpenAI and xAI have moved to fill the gap, allegedly agreeing to the military contract terms that Anthropic refused.
National Security Sovereignty: The conflict is emblematic of a new struggle: who sets the moral parameters of AI, the makers or the buyers?
Industry Context
According to industry analysts, supply chain security is no longer only about hardware; it now encompasses algorithmic alignment as well.
“We’re entering an era where ‘security’ is being redefined to include ideological alignment with the state,” says one tech policy expert.
While partners like Microsoft have sought to reassure their customers that Claude remains available for non-defense work, the commercial implications of a Pentagon blacklist are substantial.
Conclusion
The outcome of Anthropic's case could define the limits of the Department of War's jurisdiction over Silicon Valley for years to come. If the designation is upheld, safety-focused AI labs could exit the public sector en masse.
Conversely, if Anthropic prevails, the ruling could solidify the power of private firms to set their own ethical red lines, regardless of the multi-billion-dollar stakes in defense contracting.

