Anthropic filed two lawsuits against the Department of Defense on Monday, alleging that the government's decision to label the artificial intelligence firm a "supply chain risk" was illegal and violated its First Amendment rights. The two sides have been engaged in a heated debate for months, as the company attempts to implement safeguards against the military's potential use of its AI models for mass domestic surveillance or fully autonomous lethal weapons.
Anthropic filed the lawsuits in the US District Court for the Northern District of California and the US Court of Appeals for the District of Columbia Circuit after the Pentagon formally issued the supply chain risk designation last Thursday, marking the first time the blacklisting tool has been used against a US company. The AI firm had previously vowed to challenge the designation, which demands that any company doing business with the government sever all ties with Anthropic, calling it a serious threat to its business model.
Anthropic's lawsuit alleges that the Trump administration is retaliating against the company for refusing to comply with government-aligned demands, in violation of its protected speech.
Anthropic stated in its California lawsuit: "These actions are unprecedented and unlawful. The Constitution does not permit the government to use its coercive power to punish a company for its protected speech."
Anthropic's AI model, Claude, has become deeply embedded in the Department of Defense over the past year. Until recently, Claude was the only AI model approved for use in classified systems. The DoD has reportedly used it extensively in military operations, including determining where to target missile strikes in its fight against Iran.
Anthropic emphasized in its lawsuit that it remains committed to providing AI for national security purposes. The company also stated in its California lawsuit that it had previously worked with the DoD to adapt its systems for specific use cases, and said in a statement that it intends to continue its dialogue with the government.
An Anthropic spokesperson said in a statement to the Guardian: "Seeking judicial review does not change our long-standing commitment to using AI to protect national security, but it is a necessary step to protect our business, our customers, and our partners. We will continue to pursue all avenues for resolution, including dialogue with the government."
The AI firm alleges in the lawsuit that the Trump administration and Pentagon's punitive actions are causing "significant harm to Anthropic," a charge that contradicts Anthropic CEO Dario Amodei's statement last week that "the impact of this designation is minimal" and that the company "will recover."
Anthropic alleges in its lawsuit, "Defendants seek to eliminate the economic value created by one of the world's fastest-growing private companies, a leader in the responsible development of a new technology vital to our country."