Politics

Palantir faces questions over AI in military ops

The Pentagon deployed AI technology linked to Palantir to kidnap Venezuelan president Nicolas Maduro. The AI program concerned, Claude, is integral to Palantir systems used by the US military. The settler-colonial state of Israel is Claude’s biggest per capita user.

Tech firm Anthropic developed the program, and Claude is used within Palantir systems wielded by the Pentagon. Sources told the Wall Street Journal (WSJ) on 15 February that the use of AI in the 3 January raid showed how:

AI models are gaining traction in the Pentagon.

But there is a problem. Anthropic has strict rules on military usage:

Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons or conduct surveillance.

The sources said:

The deployment of Claude occurred through Anthropic’s partnership with data company Palantir Technologies, whose tools are commonly used by the Defense Department and federal law enforcement.

Anthropic’s programs can be used:

for everything from summarizing documents to controlling autonomous drones.

But could Anthropic’s ‘ethics guidelines’ have been breached?

Questions are being asked of Palantir

Questions were asked within Anthropic after the Caracas raid:

Following the raid, an employee at Anthropic asked a counterpart at Palantir how Claude was used in the operation, according to people familiar with the matter.

An Anthropic spokesperson said:

We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise.

They added:

Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.

The tech website Open Tools described Claude as a chatbot:

Claude, a chatbot developed by Anthropic, has seen diverse adoption patterns across the globe, with notable variances based on national economic statuses and technological infrastructure.

Open Tools reported that Israel is the highest per capita user of the program:

The Anthropic AI Usage Index (AUI) places Israel at the top of the leaderboard for Claude usage per capita, signifying not just a quantitative but qualitative edge in how AI is utilized across sectors in the country.

The WSJ reported Anthropic’s strict rules on ‘defence’ use might see the Pentagon divest. Chief Pentagon spokesman Sean Parnell said the US military’s relationship with Anthropic was “under review”:

Our nation requires that our partners be willing to help our warfighters win in any fight.

Sources told the WSJ the guidelines might endanger the $200mn contract awarded in summer 2025. Anthropic chief executive Dario Amodei has:

 publicly expressed concern about AI’s use in autonomous lethal operations and domestic surveillance.

These are the “two major sticking points”.

Defence secretary Pete Hegseth has said the US doesn’t want to use:

AI models that won’t allow you to fight wars.

Donald Trump’s shadow war in Latin America isn’t over, despite attention moving elsewhere after the 3 January Caracas raid.

Drones, raids and Israel

Nicolas Maduro is in a New York jail, and vice-president Delcy Rodriguez is running Venezuela in his absence. Venezuela’s left-wing government is still in power – if only in theory: it is shipping oil to Israel, for example.

The US was still hitting ‘narco’ boats in the Caribbean as of 13 February.

The US boarded another tanker loaded with Venezuelan oil on 15 February, this time in the Indian Ocean:

The vessel tried to defy President Trump’s quarantine — hoping to slip away. We tracked it from the Caribbean to the Indian Ocean, closed the distance, and shut it down. No other nation has the reach, endurance, or will to do this.

The US hasn’t finished its imperial interference in Latin America. The new Venezuelan leader is more pliable, but US piracy and drone strikes are still underway. The US has deeply embedded AI in its warfighting. Anthropic’s ethics might sink the Claude contract, but there are dozens of other AI firms ready to step in and take on lucrative military contracts.

Featured image via the Canary
