President Trump ordered the U.S. government on Friday to stop using the artificial intelligence company Anthropic's products and the Pentagon moved to designate the company a national security risk, in a sharp escalation of a high-stakes fight over the military's use of AI.
The twin decisions cap an acrimonious dispute between Anthropic and the Pentagon over whether the company could prohibit its tools from being used in mass surveillance of American citizens or to power autonomous weapon systems, as part of a military contract worth up to $200 million.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote in a Truth Social post. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!"
He said there would be a six-month phaseout of Anthropic's products.
Trump's announcement came about an hour before a deadline set by the Pentagon, which had called on Anthropic to back down. Shortly after the deadline passed, Defense Secretary Pete Hegseth posted on X that he was labeling Anthropic a supply chain risk to national security, blacklisting it from working with the U.S. military or contractors.
"In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth wrote, using the Pentagon's "Department of War" rebranding. "Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service."
Anthropic didn't immediately respond to a request for comment.
Defense Department officials had given Anthropic a deadline of 5:01 p.m. ET on Friday to drop restrictions that barred its AI model, Claude, from being used for domestic mass surveillance or fully autonomous weapons, or face losing its contract. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used "for all lawful purposes."
The government had also threatened to invoke the Korean War-era Defense Production Act to compel Anthropic to allow use of its tools and, at the same time, warned it would label Anthropic a supply chain risk.
In his post carrying out the latter threat, Hegseth said Anthropic had "delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." He accused the company of trying to "seize veto power over the operational decisions of the United States military."