Trump Orders Federal Agencies to Halt Use of Anthropic AI Technology Amid Ethics Dispute
February 28, 2026 — In a dramatic escalation of tensions over artificial intelligence ethics, President Donald Trump has directed all U.S. federal agencies to immediately stop using technology developed by the AI company Anthropic. The order comes amid a standoff between Anthropic and the Department of Defense (DoD) over the ethical safeguards embedded in Anthropic’s AI systems.
Breakdown of Talks Between Anthropic and the Pentagon
The dispute centers on a breakdown in negotiations between the Pentagon and Anthropic over the use of the company’s Claude AI system for military applications. The Department of Defense demanded that Anthropic relax its strict ethical guidelines, specifically seeking to lift restrictions that prevent the AI from being used for mass surveillance or autonomous weapon systems. Anthropic, widely regarded as one of the most safety-conscious AI firms, refused to compromise on these core safety guardrails.
Just hours before a critical Friday deadline for an agreement, Trump issued a statement on his social media platform Truth Social sharply criticizing Anthropic’s position. He accused the company of trying to "strong-arm" the military and indicated that the fate of U.S. national security should not be dictated by a "radical left AI company."
Department of Defense’s Firm Response
Following the deadline’s expiration without a deal, Defense Secretary Pete Hegseth classified Anthropic as a supply-chain risk to U.S. national security. This designation, typically reserved for foreign adversaries, effectively bans any military contractors, suppliers, or partners from conducting business with Anthropic. Hegseth emphasized that U.S. warfighters “will never be held hostage by the ideological whims of Big Tech.” The Pentagon, which had a $200 million two-year contract with Anthropic, announced a transitional period of up to six months during which existing services could still be used while a complete severance unfolds.
The General Services Administration (GSA) followed suit, terminating its contracts with Anthropic on Friday evening.
Anthropic’s Response and Legal Challenge
Anthropic issued a statement expressing sadness over recent developments and denying that it had received any direct communication from the DoD or the White House about the status of the negotiations. The company vowed to legally challenge the supply-chain risk designation, calling the move "unprecedented" for an American company.
Anthropic reiterated its refusal to allow its AI systems to be used for mass domestic surveillance or fully autonomous weapons, describing these restrictions as “narrow exceptions” that have not impeded any government mission to date. The company affirmed its support for lawful uses of AI in national security but remained firm on its ethical stances.
OpenAI Steps In With New Pentagon Deal
In a swift turn of events just hours after Anthropic’s ouster, OpenAI, another major AI developer and Anthropic’s chief competitor, announced it had secured a new contract to provide AI technology for classified Pentagon networks. OpenAI CEO Sam Altman emphasized that the agreement includes the same ethical guardrails that led to the fallout with Anthropic, notably bans on mass surveillance and a requirement for human control over the use of force.
Altman expressed hope that the Pentagon would apply these safety principles uniformly across all AI companies to foster cooperation and avoid legal conflicts.
Industry Reactions and Broader Implications
The public disagreement has spotlighted growing tensions between the U.S. military’s operational demands and AI companies’ ethical commitments. Pentagon officials, including spokesperson Sean Parnell, denied allegations that the DoD sought mass surveillance or fully autonomous weapon systems, calling such narratives “fake” and politically motivated.
Anthropic’s stance has drawn support within Silicon Valley, including from executives and employees at both OpenAI and Google. Nearly 500 employees of the two companies signed an open letter warning against divisive tactics aimed at pressuring AI firms to concede on key ethical commitments.
What’s Next?
While the federal government has taken a hard line against Anthropic, dialogue could still resume between the company and the Pentagon. Alternatively, other AI providers may take over the military contracts previously held by Anthropic.
This episode underscores the complex balance between advancing AI technology for national security and upholding ethical standards—a debate that is likely to intensify as artificial intelligence becomes increasingly integrated with defense operations.
Contact Information:
For confidential tips or information on this story, readers can utilize secure messaging tools available through The Guardian app or SecureDrop to communicate privately with journalists.
This report is based on information originally published by The Guardian on February 27-28, 2026.