As the US administration proceeds to drop Anthropic as a supplier, many are rallying around the AI company’s relatively ethical stance, creating ‘unprecedented demand’ for Claude.
Anthropic’s Claude has fast become the darling of AI enthusiasts for development, research and enterprise work. Now it is facing the might of the US administration, which is threatening to drop the company entirely as a supplier after a falling out with the Pentagon over so-called “red lines” it would not cross.
With many in Silicon Valley supporting its relatively principled stand, and general users sending it to the top of the US Apple charts for free downloads in recent days – beating OpenAI’s ChatGPT for the first time – its flagship Claude.ai and Claude Code apps went down for around three hours on Monday (2 March), causing many to bemoan their absence. There are already reports of further outages as we write, although Anthropic’s latest status update says “a fix has been implemented and we are monitoring the results”.
In a nostalgic post on LinkedIn yesterday, AI aficionado and regular Silicon Republic contributor Jonathan McCrea wrote: “I now feel the same way about Claude being down as I used to about Twitter being down.”
De facto boycott
Last night, treasury secretary Scott Bessent added his voice to the de facto US administration boycott, saying in a post on X that his department would terminate its use of Anthropic products.
It follows a directive from president Donald Trump ordering US agencies to “phase out” their use of the AI company’s products, and his defence department labelling Anthropic a “supply-chain risk”, a designation normally reserved for suppliers from non-friendly foreign states. Anthropic has been quick to say that this is a “legally unsound” designation, and is expected to challenge the move in the courts.
Reuters is also reporting that it has seen memos to employees at the Department of Health and Human Services asking them to switch to other AI platforms such as ChatGPT and Gemini, and at the State Department saying it was switching the model powering its in-house chatbot, StateChat, from Anthropic to OpenAI.
Financially, the move will surely deal a serious blow to Anthropic in the short term, but some commentators argue it could be a pivotal moment for the company, which may now be seen by many as the relatively ethical choice among the AI giants.
The recent Grok scandal has put a major question mark over xAI’s credentials, and OpenAI’s Sam Altman clearly sees the reputational risk: he has been quick to claim that OpenAI is ensuring some guardrails in its contract with the Pentagon.
On X yesterday Altman claimed that these guardrails would ensure OpenAI would not be “intentionally used for domestic surveillance of US persons and nationals”.
The backstory
If you haven’t been following, Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens.
On Thursday (27 February), Anthropic CEO Dario Amodei released an official statement saying that in “a narrow set of cases, we believe AI can undermine, rather than defend, democratic values”.
“Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said. “Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included.
“We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.”
Amodei went on to say that partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. “But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
It’s a debacle that is likely to roll on in the coming days, and it remains to be seen whether Anthropic can withstand the unprecedented onslaught from its own government and rely on the support of users for its principled stand. In the short term, its challenge is simply to meet the current demand on its systems.