
US military used Anthropic for Iran strike despite Trump’s ban: WSJ


Crypto Breaking News

The US military reportedly relied on Anthropic’s Claude AI during a major air strike in Iran, a development that surfaced just hours after President Donald Trump ordered federal agencies to halt use of the model. Commands in the region, including CENTCOM, reportedly used Claude to support intelligence analysis, target vetting, and battlefield simulations. The episode highlights how deeply AI tooling has been woven into defense operations even as policymakers push to cut ties with certain vendors, and it underscores a tension between executive directives and on-the-ground automation that could influence procurement and risk management across defense programs.

Key takeaways

Sentiment: Neutral

Market context: The episode sits at the intersection of defense procurement, AI ethics, and national-security risk management as agencies reassess vendor dependencies and the classification of AI tools for sensitive operations.

Why it matters

The incident offers a rare glimpse into how commercial AI models are integrated into high-stakes military workflows. Claude, originally designed for broad cognitive tasks, reportedly supported intelligence analysis and the modeling of battlefield scenarios, suggesting a level of operational trust that extends beyond lab environments into real-world missions. This raises important questions about the reliability, auditing, and controllability of AI in combat planning, especially when government policy signals shift rapidly around vendor usage.

At the policy level, the friction between a contracting relationship and a presidential directive highlights a broader debate about how AI vendors should be treated in secure environments. Anthropic’s refusal to grant unrestricted military use aligns with its stated ethical boundaries, signaling that private-sector providers may increasingly push back against configurations they deem ethically problematic. The Pentagon’s response—turning to alternative suppliers for classified workloads—illustrates how defense departments may diversify AI ecosystems to reduce risk exposure, while maintaining capability in sensitive operations.


The tension also touches on the competitive dynamics of the AI-as-a-service market. With OpenAI reportedly stepping in to provide models for classified networks, the sector is likely to witness continued experimentation and renegotiation of terms around security classifications, data governance, and supply-chain risk. The situation underscores the need for rigorous governance frameworks that can adapt to rapid technological change without compromising operational security or ethical standards.

What to watch next

  • Regulatory and policy updates from the Defense Department and the White House regarding AI vendor usage and security classifications.
  • Any new procurement or partnerships that extend AI capabilities for classified missions, including potential agreements with alternative providers to replace or supplement Anthropic’s offerings.
  • Public statements from Anthropic and OpenAI about the nature of deployments on secured networks and any new restrictions or guardrails.
  • Further details on the outcome of the earlier unrestricted-use negotiations and how that will shape future defense contracting with AI vendors.

Sources & verification

  • Reports about Claude’s use in a Middle East operation and the administration’s halt order, including evidence discussed with sources familiar with the matter.
  • Background on Anthropic’s Pentagon contract, including the multiyear arrangement worth up to $200 million and partnerships with Palantir and AWS for classified workflows.
  • Statements from Anthropic’s leadership and public comments on military use and ethical boundaries, including interviews and official responses to regulatory actions.
  • OpenAI’s deployment on classified networks and related discussions, including public discourse around a deal with the U.S. military and associated coverage.
  • Public discussions and social-media references connected to the OpenAI arrangement with the military, such as posts documenting industry reactions.

Anthropic’s Claude in the crosshairs: AI, ethics and policy collide in defense operations

Officials described Claude as playing a role in intelligence analysis and operational planning during a major air strike in Iran, a claim that illustrates how close AI tools have moved to battlefield decision-making. While the Trump administration moved to sever ties with Anthropic, the operational use of Claude reportedly persisted in certain commands, underscoring a disconnect between policy statements and day-to-day defense workflows. The practical reality is that AI-driven analyses, simulations, and risk assessments can slip into mission planning even as agencies reassess vendor risk and compliance requirements across departments.

The Pentagon’s prior engagement with Anthropic was substantial: a multiyear contract valued at up to $200 million and a network of partnerships, including Palantir and Amazon Web Services, that enabled Claude’s use in classified information handling and intelligence processing. The arrangement highlighted a broader strategy: diversify AI capabilities across a trusted ecosystem to ensure resilience in sensitive settings. Yet when policy directions shifted, the administration moved to reframe the vendor relationship, signaling a risk-based recalibration rather than a wholesale retreat from AI-enabled defense operations.

Behind the scenes, tensions between public policy and private sector ethics came to the fore. Defense Secretary Pete Hegseth reportedly pressed Anthropic to permit unrestricted military use of its models, a request that Anthropic’s leadership rejected as crossing ethical lines the company would not cross. The firm’s stance centers on the belief that certain uses—mass domestic surveillance and fully autonomous weapons—raise profound ethical and legal concerns, and that meaningful human oversight should survive the transition from concept to execution. This position aligns with ongoing debates about how to balance rapid AI adoption with safeguards against abuse and unintended consequences.

For its part, the Pentagon did not stand still. Facing a potential supplier gap, it began lining up replacements and reportedly reached an agreement with OpenAI to deploy models on classified networks. The shift underscores a broader strategic move to ensure continuity of capability, even as vendors re-evaluate their terms for sensitive deployments. The contrast between Anthropic’s ethical boundaries and the department’s operational needs reveals a broader policy tension: how to harness transformative technology responsibly while preserving national security imperatives.


Industry observers also noted the ecosystem effects of such transitions. The AI market is evolving toward more modular, security-cleared configurations that can be swapped or upgraded as policy and risk assessments shift. The OpenAI arrangement, in particular, signals continued appetite for integrating leading models into defense networks, albeit under stringent governance and oversight. While this trajectory promises enhanced capability for military analysts and planners, it also elevates scrutiny around data handling, model interpretability, and the risk of over-reliance on automated systems for critical decisions.

Anthropic’s CEO, Dario Amodei, has argued that while AI can augment human judgment, it cannot replace it in core defense decisions. In public remarks, he reaffirmed the company’s commitment to ethical boundaries and to maintaining human control in pivotal moments. The tension between maintaining access to cutting-edge tools and upholding ethical standards is likely to shape future negotiations with federal agencies, particularly as lawmakers and regulators scrutinize AI’s role in civilian and national-security contexts.

As the landscape evolves, the broader crypto and tech communities will be watching how these policy and procurement dynamics influence the development and deployment of advanced AI systems in high-stakes environments. The episode serves as a case study in balancing rapid technological advancement with governance, oversight, and the enduring question of where human responsibility ends and automated decision-making begins.


Strategy Raises STRC Yield by 25 Basis Points to 11.50%


Strategy chairman Michael Saylor said in a social media post on Sunday that the largest Bitcoin (BTC) treasury company is raising the dividend on its STRC preferred stock, also known as “Stretch,” to 11.50% for March 2026, from the previous 11.25%.

STRC is perpetual, meaning the company is not obligated to buy back the stock at any specified date, and features a variable yield that changes monthly.

A Friday update on the company’s website confirmed Saylor’s post. “STRC’s dividend rate is adjusted monthly to encourage trading around STRC’s $100 par value and to help strip away price volatility,” according to the website. The dividend is also paid monthly, with the next payout date on March 31 for shareholders of record.
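
As a back-of-the-envelope illustration (not official payout math; actual amounts depend on Strategy’s terms and record dates), the 25-basis-point change works out as follows on STRC’s $100 par value:

```python
# Illustrative sketch only: approximates the STRC payout change,
# assuming a simple annual_rate / 12 monthly split on the $100 par value.

PAR_VALUE = 100.00   # STRC's stated $100 par value
OLD_RATE = 0.1125    # previous annualized dividend rate (11.25%)
NEW_RATE = 0.1150    # rate announced for March 2026 (11.50%)

def monthly_dividend(par: float, annual_rate: float) -> float:
    """Approximate monthly payout per share under the annual_rate/12 assumption."""
    return par * annual_rate / 12

bump_bps = (NEW_RATE - OLD_RATE) * 10_000
print(f"Increase: {bump_bps:.0f} bps")                                   # → 25 bps
print(f"Old monthly payout: ${monthly_dividend(PAR_VALUE, OLD_RATE):.4f}")  # → $0.9375
print(f"New monthly payout: ${monthly_dividend(PAR_VALUE, NEW_RATE):.4f}")  # → $0.9583
```

In other words, the bump adds roughly two cents per share to the monthly payout, a small lever the company can move each month to keep the stock trading near par.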

In February, Strategy CEO Phong Le said the company is pivoting away from issuing common stock to fund its BTC purchases and toward issuing more preferred shares.

Source: X.com, @saylor (Michael Saylor)

“Last year, Stretch and our perpetual preferreds raised $7 billion. That’s 33% of the entire preferred market,” Le said.

“As we go throughout the course of this year, we expect Stretch to be a big product for us,” he said, adding, “We will start to transition from equity capital to preferred capital.”

To be sure, the company continues to accumulate Bitcoin amid a market drawdown that has nearly halved the price of Bitcoin since October and driven down the share prices of digital asset treasury companies.

In the year to date, BTC has lost 23.2% of its value, while the share price of Bitwise Bitcoin Standard Corporations ETF (OWNB) is down 16.1%. That exchange-traded fund provides exposure to public companies holding significant amounts of Bitcoin on their balance sheets.

A history of Strategy’s BTC purchases. Source: Strategy

Related: Strategy yield wrapper lands in Europe as 21Shares lists STRC ETP

Strategy records $12.4 billion loss in Q4 2025

Strategy in early February reported a net loss of $12.4 billion for the fourth quarter of 2025, a result that sent the company’s share price down about 13% to roughly $107.


Despite revenue for the quarter increasing 1.9% year-over-year to about $123 million, the company’s stock has been in freefall.

Strategy’s (MSTR) common stock price briefly hit a high of $543 per share during intraday trading in November 2024 before falling below $300 in February 2025.

The company’s stock has fallen by about 75% since the November 2024 peak, closing on Friday at $129.50 a share.
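
For a quick sanity check of that figure, using the two prices quoted above:

```python
# Illustrative check of the reported drawdown, using the prices cited in the text.

PEAK = 543.00    # intraday high, November 2024
CLOSE = 129.50   # Friday close

drawdown = 1 - CLOSE / PEAK
print(f"Decline from peak: {drawdown:.1%}")   # → 76.2%, i.e. "about 75%"
```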

Strategy’s stock performance over the last year. Source: Yahoo Finance

The price of BTC is trading well below Strategy’s average purchase cost of $76,020 per Bitcoin, according to data from the company.

Strategy last bought BTC during the week of Feb. 16, when the company purchased 592 BTC, valued at over $39.8 million, bringing its total holdings to 717,722 BTC and marking its 100th BTC acquisition.
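
The reported figures imply a purchase price well below the company’s stated average cost; a rough check (an illustration from the numbers above, not official data):

```python
# Implied price of the most recent purchase vs. Strategy's reported average cost,
# computed from the figures cited in the article.

PURCHASE_USD = 39_800_000   # "over $39.8 million" for the week of Feb. 16
PURCHASE_BTC = 592
AVG_COST = 76_020           # reported average purchase cost per BTC
TOTAL_BTC = 717_722         # reported total holdings

implied_price = PURCHASE_USD / PURCHASE_BTC
print(f"Implied price of latest buy: ~${implied_price:,.0f} per BTC")  # → ~$67,230
print(f"Reported average cost:       ${AVG_COST:,} per BTC")
print(f"Total holdings:              {TOTAL_BTC:,} BTC")
```

At roughly $67,000 per coin, the latest tranche came in about $9,000 below the firm’s lifetime average cost, consistent with the article’s point that BTC is trading well under that benchmark.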


Magazine: Bitcoin’s ‘biggest bull catalyst’ would be Saylor’s liquidation: Santiment founder