
Gaza was a testing site for horrific military AI


The UK has awarded lucrative contracts to 26 arms firms to develop its own autonomous targeting systems. This is despite numerous atrocities in Gaza, and now Iran, being linked to haphazard AI kill chain systems. The UK NGO Drone Wars reported:

that as public concern about the use of AI for warfighting grows in the aftermath of Israel’s war on Gaza and US strikes on Iran, the UK is quietly pressing ahead with development of a new AI-based military targeting system.

This should worry us all. Besides the gigantic cost (up to £1bn handed to death firms), the ethics and effectiveness of handing killing over to AI systems are highly dubious.

The UK is developing a system (typically and idiotically) named ASGARD, a reference to Norse mythology. The tender notice states:

This Open Framework will focus on the ‘Decide’ element of the target acquisition cycle (Sense-Decide-Effect); supporting ASGARD’s goal of reinventing, and transforming, how land forces deliver operational decision-support and decision-making software via the use of modern Artificial Intelligence / Machine Learning (AI/ML) technologies.

Gaza is a testing ground for AI

Drone Wars’ Chris Cole said:


While militaries are keen to use AI to speed up decision making around lethal strikes, there are serious ethical and legal concerns about these developments, with increasing evidence that ratcheting up the number of strikes leads to greater danger for civilians.

Drone Wars’ tone is urgent to say the least:

As we have said before, the grave dangers of introducing AI into warfare and in particular for the use of force are well known. While arguments have been made for and against these systems for more than a decade, increasingly we are moving from a theoretical, future possibility to the real world: here, now, today.

The horrifying nature of autonomous war systems is hardly a mystery in 2026. Israel’s genocide in Gaza has been fuelled by AI tools like Lavender and the grotesquely named Where’s Daddy, which is:

used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

There is mounting evidence that AI targeting has shaped the US-Israeli attack on Iran too. The Guardian said on 3 March:

Academics studying the field say AI is collapsing the planning time required for complex strikes – a phenomenon known as “decision compression”, which some fear could result in human military and legal experts merely rubber-stamping automated strike plans.

The US also reportedly used AI in the 3 January attack on Venezuela.


The US and Israel attacked Iran first on 28 February, without provocation, at a time when Iran was offering unprecedented concessions in negotiations. The Pentagon has since stated there was no imminent threat from Iran, and the UN’s atomic watchdog, the IAEA, has said there is no evidence Iran was developing a nuclear weapon.

The UK government is racing to catch up with its allies in the US and Israel. There is ample evidence that AI targeting is, at best, deeply flawed. Its increasing use by indifferent imperial powers, seemingly concerned more with speed and a deadly numbers game than with accuracy, has already produced horrific results for targeted populations in Gaza, Iran, and Latin America.

Featured image via the Canary

