
Trump administration blacklisted Anthropic – now tells banks to use its AI

In short: Treasury Secretary Scott Bessent and Fed Chair Jerome Powell are urging Wall Street’s biggest banks to test Anthropic’s Mythos AI model for cybersecurity vulnerabilities, even as the Pentagon fights Anthropic in court after branding it a supply chain risk for refusing to remove safety guardrails on autonomous weapons and mass surveillance. JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are all reportedly testing the model. Mythos, which found thousands of zero-day flaws across major operating systems and browsers, is being distributed through a restricted programme called Project Glasswing to roughly 50 organisations. UK regulators are also scrambling to assess the risks.

The Trump administration is quietly encouraging America’s largest banks to test technology from the same AI company it has spent two months trying to destroy. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned executives from JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley this week and urged them to use Anthropic’s new Mythos model to detect cybersecurity vulnerabilities in their systems, according to Bloomberg.

The recommendation is remarkable for its contradiction. Anthropic is currently fighting the Department of Defense in federal court after Defense Secretary Pete Hegseth designated the company a “supply chain risk”, a label that bars it from military contracts and directs defence contractors to stop using its technology. The designation came after Anthropic refused to remove two safety restrictions from its AI models: no use in fully autonomous weapons, and no deployment for mass surveillance of American citizens.

Now, two of the administration’s most senior economic officials are telling Wall Street to adopt the very product the Pentagon has tried to blacklist.


What Mythos actually does

Claude Mythos Preview is a frontier model that Anthropic did not explicitly train for cybersecurity. The vulnerability-finding capability emerged as what the company describes as a downstream consequence of general improvements in code reasoning and autonomous operation. During testing, Mythos identified thousands of zero-day vulnerabilities (flaws previously unknown to software developers) across every major operating system and web browser.

The capabilities were significant enough that Anthropic chose not to release the model publicly. Instead, it launched Project Glasswing, a controlled programme giving access to roughly 50 organisations including Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, Palo Alto Networks, and JPMorgan Chase. Anthropic has committed up to $100 million in usage credits and $4 million in direct donations to open-source security organisations as part of the initiative.

The framing, a model “too dangerous to release”, has drawn scepticism. Tom’s Hardware noted that claims of “thousands” of severe zero-day discoveries relied on just 198 manual reviews, and that many of the flagged vulnerabilities were in older software or were impractical to exploit. Others in the security community suggested the restricted release looked less like responsible AI governance and more like a smart enterprise sales strategy: create scarcity, generate fear, and let the customers come to you.

The Pentagon paradox

The collision between the Bessent-Powell recommendation and the Hegseth designation is not a matter of mixed signals; it is two branches of the same administration pursuing openly contradictory policies toward the same company.

The Pentagon dispute began in February, when Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to drop the company’s safety restrictions or lose its $200 million defence contract. Amodei refused. Hegseth responded by declaring Anthropic a supply chain risk, and President Trump ordered federal agencies to stop using its technology. A Pentagon official accused Amodei of having a “God complex.” Trump called Anthropic a “radical left, woke company.”


The courts have since split. A federal judge in California issued a preliminary injunction blocking the supply chain designation, writing that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.” An appeals court in Washington, D.C., denied Anthropic’s request to temporarily halt the blacklisting while the case proceeds. The net effect: Anthropic is excluded from DoD contracts but can continue working with other government agencies.

It is into that gap, excluded from the Pentagon but not from the Treasury or the Fed, that Bessent and Powell stepped this week.

What the banks are actually doing

JPMorgan Chase was the only bank listed as an official Project Glasswing partner, but Bloomberg reported that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are all testing Mythos internally. The use cases reportedly include vulnerability detection, fraud-risk flagging, and compliance workflow automation across financial systems.

The speed of adoption reflects a genuine fear. If Mythos can find zero-day vulnerabilities in operating systems and browsers, it can presumably find them in banking infrastructure too, and so can any sufficiently capable model that follows. The defensive logic is straightforward: better to find the holes before an adversary’s AI does.


The regulatory response has been international. The Financial Times reported that UK officials at the Bank of England, the Financial Conduct Authority, and HM Treasury are in discussions with the National Cyber Security Centre to examine potential vulnerabilities highlighted by Mythos. Representatives from major British banks, insurers, and exchanges are expected to be briefed within the fortnight.

The uncomfortable implication

The Mythos episode exposes a structural problem in the administration’s approach to AI. The same government that branded Anthropic a national security threat because it refused to remove safety guardrails is now urging the financial system to depend on Anthropic’s technology for its own security. The message to Anthropic is incoherent: you are too dangerous to trust with defence contracts, but indispensable enough that the Treasury Secretary personally phones bank CEOs to recommend your product.

For Anthropic, the contradiction is strategically useful. Every bank that adopts Mythos deepens the company’s integration into critical national infrastructure, making the supply chain designation look increasingly absurd. For the administration, the episode reveals what happens when national security policy is driven by personal grievance rather than coherent strategy: the left hand blacklists what the right hand is busy deploying.

The banks, for their part, appear untroubled by the contradiction. When the Treasury Secretary and the Fed Chair tell you to test something, you test it, regardless of what the Pentagon thinks about the company that made it.

