AI company Anthropic sued the Department of Defense in federal court after the Pentagon designated it a “supply chain risk” on February 27, 2026, effectively barring the firm from government contracts. The designation came after Anthropic refused to remove ethical restrictions preventing its Claude AI models from being used for mass surveillance and autonomous weapons systems.
Unprecedented Use of Security Powers
The Pentagon’s action marks the first time such a designation has been applied to a major U.S. technology company, according to Mayer Brown. Defense Secretary Pete Hegseth announced that all federal agencies must cease using Anthropic’s technology, invoking frameworks like the Federal Acquisition Supply Chain Security Act of 2018 that typically target foreign adversarial technology.
The dispute originated when the Pentagon sought to renegotiate its existing agreement with Anthropic to allow Claude AI models to be used “for all lawful purposes,” according to WIRED. The company refused to remove prohibitions against using its technology for mass domestic surveillance and fully autonomous weapons systems, and negotiations broke down.
The designation’s impact extends far beyond Anthropic itself. Companies including Microsoft and Palantir, which integrate Claude models into government systems like Palantir’s Maven Smart System, face costly disruptions, the Center for American Progress reported. All government contractors are now prohibited from using or providing Anthropic’s products for federal work.
Legal Battle Intensifies

In early March, Anthropic filed suit in the U.S. District Court for the Northern District of California, arguing the designation constitutes unconstitutional retaliation. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the company stated in its filing, characterizing the action as an “unlawful campaign of retaliation,” according to the Center for American Progress.
Microsoft quickly backed Anthropic with an amicus brief on March 11, warning that removing the widely integrated AI models could “potentially hamper US warfighters at a critical point in time,” Inc.com reported. The tech giant called the Pentagon’s directive “vague and ill-defined,” arguing that using national security tools to settle contract disputes undermines the public interest.
Support has emerged across Silicon Valley. Google DeepMind chief scientist Jeff Dean and other employees from Google and OpenAI filed their own amicus brief, warning that the decision introduces “unpredictability” that threatens American AI innovation, according to Yahoo Finance. OpenAI CEO Sam Altman publicly called it a “very bad decision.”
The case highlights fundamental tensions over who controls AI deployment terms in defense applications, potentially reshaping how the Pentagon procures cutting-edge technology from companies with strong ethical frameworks.
Sources
- wired.com
- finance.yahoo.com
- inc.com