
You could be downloading ‘shadow AI agents’ without knowing – how dangerous are they?


These double-agents don’t have the best intentions (Picture: Getty/Metro)

Tech experts and business leaders have warned that ‘shadow AI agents’ are posing an increasing security threat in the UK.

AI agents are systems designed to book travel, schedule meetings, sketch charts, handle customer complaints and even socialise with each other.

But these agents are increasingly being offered new gigs – as double agents. Not in the 007 way, though.

Instead, as Microsoft tells Metro, ‘shadow AI’ agents are back-alley bots with no formal approval or oversight from employers or officials.


A new poll of business leaders by Microsoft, shared exclusively with Metro, shows that 84% see shadow AI as a growing security threat.

What is a ‘shadow AI agent’?

Agentic AI is in a different league compared to conventional chatbots (Picture: Philip Dulian/DPA/Cover Images)

AI agents need a lot of access and data to do their jobs – there are even guides these days on how to make them GDPR compliant.

These agents, Microsoft’s national security officer Jo Miller tells Metro, are common on personal and work phones and laptops alike.

‘We might choose to download some tools beyond Copilot, for example,’ she says of Microsoft’s AI model.


‘Some might be developed by Western companies, others elsewhere that have a different lens on how AI should be used and data protected.

‘If I choose to download three more, maybe an image generator or a research agent, I can’t have the same confidence in where these tools come from – they could be harvesting my data, selling it, misusing it and playing it back as misinformation or disinformation.’

What can shadow AI agents do?

The computer giant found that many bosses are confident in tackling shadow AI threats (Picture: Cheng Xin/Getty Images)

Microsoft’s survey of 1,000 major public and private sector bosses, conducted in January, shows that bosses are quickly trying to get their heads around new-fangled tech like AI agents.

At least 62% of organisations are already deploying autonomous AI agents, almost triple the 22% recorded last year.

Even with shadowy AI agents in the back of their minds, 68% expect agents to be fully integrated across their organisation within a year.


Microsoft says that as employees rush to embrace AI agents, they are creating security blind spots that bosses are now scrambling to address.

Most mainstream AI agents, Miller explains, have a level of autonomy held back by corporate guardrails – they won’t go off the rails, in other words.
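To picture what such a guardrail looks like in practice, here is a minimal sketch – every name in it (APPROVED_TOOLS, run_tool) is hypothetical, not any vendor’s real API – of an agent whose tool calls must pass an approval check:

```python
# Minimal illustrative sketch: APPROVED_TOOLS and run_tool are hypothetical
# names, not a real vendor's API.
APPROVED_TOOLS = {"calendar", "email", "travel_booking"}  # the corporate allow-list

def run_tool(tool_name: str, payload: dict) -> dict:
    """Refuse any tool call the company has not signed off on."""
    if tool_name not in APPROVED_TOOLS:
        raise PermissionError(f"blocked unapproved tool: {tool_name!r}")
    # ...hand the request to the sanctioned tool here...
    return {"status": "ok", "tool": tool_name}

# A sideloaded 'shadow' tool is stopped before it can touch any data:
try:
    run_tool("unvetted_image_generator", {"prompt": "sales chart"})
except PermissionError as err:
    print(err)  # blocked unapproved tool: 'unvetted_image_generator'
```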

But these agentic tools can be exploited by cyber criminals or ‘hostile nation states’ to conduct cyber attacks, ransomware attacks, data theft and IP theft, actions typically described as ‘adversarial’.

What do companies itching to use AI agents need to do to keep us safe?

Microsoft found that 86% of leaders are deploying AI agents to tackle security challenges, though 80% worry about managing agents at scale.


As the race is on to embrace these futuristic-sounding machines, 85% believe deployment is progressing faster than the oversight frameworks built to support it.

Nevertheless, 87% told Microsoft they’re confident they can prevent shadow AI tools from being created or used.

Security experts told Microsoft that organisations should have three priorities (the first is sketched in code after this list):

  • Maintain visibility over where AI agents are operating (50%) 
  • Integrate agents safely into existing systems and processes (50%) 
  • Meet compliance, risk and audit requirements as autonomous activity expands (49%).
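
Visibility, the first of those priorities, boils down to knowing which agents are running at all. A rough sketch of the idea, using entirely made-up agent names and data:

```python
# Hypothetical sketch: reconcile agents observed on the network against
# the register of agents IT has actually approved.
registered = {"copilot_work", "hr_helpdesk_bot"}  # signed off by IT
observed = {"copilot_work", "hr_helpdesk_bot", "freebie_research_agent"}  # seen in logs

for name in sorted(observed - registered):
    print(f"possible shadow AI agent: {name}")  # flag for security review
```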

By ‘hostile nation states’, also called nation-state threats, Miller means groups tied to countries that don’t have the best intentions.


Think pro-Russia groups amid Moscow’s war against Ukraine, with Miller saying there has been a rise in cyber attacks over the last four years.

AI agents used at work can sometimes be fully integrated – embedded in email services, slideshow software and other apps.

‘If I bring in another tool that will sit just outside our platform, I don’t know what back doors there might be to exfiltrate data,’ Miller says.

She adds: ‘We really need to be really deliberate and clear about what tools we’re downloading and using.


‘We don’t truly know where data might be going if we don’t understand the security parameters around a particular tool.’

AI-powered cyber threats have risen during the Russia-Ukraine war (Picture: REUTERS)

What should you do about shadow AI?

The main thing, according to Miller, is to only use AI tools you can trust.

‘Like by a known vendor or supplier,’ she adds, ‘that’s well-established and has published information around how secure they are.’

‘There’s an element of faith or trust we place in AI, but we need to remember these tools are designed around the human brain.

‘So, in the same way a human brain misremembers, the same way the brain is not always factually correct, these models will not always be correct.


‘Humans in the loop adds a level of accountability and an assurance of output.’

Get in touch with our news team by emailing us at webnews@metro.co.uk.

For more stories like this, check our news page.
