Trump Orders Federal Agencies to Halt Use of Anthropic AI Tools in Escalating Clash Over Military Safeguards

President Donald Trump directed all U.S. federal agencies on February 27, 2026, to immediately cease using artificial intelligence technology developed by Anthropic, escalating a high-profile dispute with the San Francisco-based startup over safeguards on its Claude AI model for military applications.

In a Truth Social post, Trump accused Anthropic of attempting to “strong-arm” the Department of Defense — which he referred to as the Department of War — by imposing restrictions on how its technology could be deployed. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump wrote. “Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!”

Trump specified a six-month phase-out period for agencies, including the Pentagon, where Anthropic’s tools are already integrated into systems. He threatened further action if the company did not cooperate during the transition, including potential civil and criminal consequences.

The directive followed a Pentagon-imposed deadline that expired Friday evening for Anthropic to lift guardrails limiting Claude’s use in fully autonomous weapons or mass surveillance of U.S. citizens. Anthropic CEO Dario Amodei had publicly refused to remove the safeguards, arguing they were essential to prevent misuse that could undermine democratic values.

Shortly after Trump’s announcement, Defense Secretary Pete Hegseth designated Anthropic a “supply-chain risk to national security.” The label, typically applied to foreign adversaries, bars military contractors and suppliers from doing business with the company. The move effectively blacklists Anthropic from future federal defense work and could force vendors to certify non-use of its models.

The General Services Administration, which manages federal procurement, quickly complied by removing Anthropic from its USAi.gov platform and Multiple Award Schedule. GSA Administrator Edward C. announced the agency “stands with the President in rejecting attempts to politicize work dedicated to America’s national security.”

The clash highlights deep tensions between the Trump administration’s push for unrestricted military AI adoption and Silicon Valley’s efforts to impose ethical constraints. Anthropic, founded in 2021 by former OpenAI executives including Amodei, has positioned itself as a safety-focused alternative in the AI race, emphasizing constitutional AI principles to align models with human values.

Federal agencies had increasingly adopted Claude for tasks ranging from intelligence analysis to administrative automation, drawn to its strong performance in reasoning and coding. The ban could disrupt ongoing projects, particularly in defense and intelligence, where unwinding integrations may prove complex and costly.

In a swift countermove, OpenAI announced late Friday that it had reached an agreement with the Pentagon to supply its AI technology for classified systems. The deal positions OpenAI as a key alternative provider, potentially accelerating its military footprint amid the vacuum left by Anthropic’s exclusion.

Anthropic had not issued a formal response to the order as of February 28, but sources close to the company indicated it would challenge the designation in court, arguing that its restrictions were narrowly tailored to prevent harmful applications while complying with existing law. Industry observers noted the unprecedented nature of blacklisting a U.S.-based AI firm over its usage terms.

Silicon Valley reactions split along ideological lines. Some leaders rallied behind Anthropic, praising its principled stance on AI safety. Others criticized the move as government overreach that could chill innovation and deter companies from pursuing defense contracts. Posts on X and other platforms reflected broader debate over balancing national security with ethical AI governance.

The administration framed the action as essential to ensuring the U.S. military retains full operational flexibility. Trump emphasized that no private company should dictate how the armed forces employ lawful tools. Supporters, including some Republican lawmakers, echoed the sentiment, viewing Anthropic’s safeguards as potential impediments in strategic competition with adversaries like China.

Critics, including civil liberties groups and some Democrats, expressed concern that removing guardrails could enable unchecked surveillance or autonomous lethal systems. They urged congressional oversight of the designation process and called for transparent guidelines on military AI use.

The order arrives amid broader Trump administration efforts to reshape federal technology policy, including accelerated AI adoption for government efficiency while prioritizing American dominance in the field. It also reflects ongoing friction with tech firms perceived as aligned with progressive values.

Anthropic’s valuation, once exceeding $60 billion, faces uncertainty with the loss of federal business, though the company maintains strong commercial and enterprise customers. Shares in related AI firms showed mixed movements in after-hours trading, with some analysts predicting a shift toward providers more amenable to defense needs.

As agencies begin the phase-out, questions linger about timelines, replacement costs and potential litigation. The episode underscores the growing intersection of AI ethics, national security and executive power in an era of rapid technological advancement.
