
Anthropic’s Mythos a game changer, NCSC chief tells Oireachtas


‘We’re in a race whether we choose to accept it or not,’ Richard Browne said.

Mythos’ creation shows just what is possible with AI tools in the cybersecurity space, National Cyber Security Centre (NCSC) director Richard Browne told the Oireachtas Joint Committee on Artificial Intelligence this afternoon (14 April).

“The issue is not that Anthropic has created this. The issue is that Anthropic has demonstrated that this is possible,” Browne said in response to questions posed by Social Democrats TD Sinéad Gibney, who said Anthropic is engaging in a “PR exercise”.

“This technology exists and it’s possible to use it. [Currently] it’s in the hands of a company. In five months – six months – it’ll be in the hands of an active state [actor],” Browne said. “Governance is great, very important, but it doesn’t stop criminal actors.”


Anthropic launched Mythos earlier this month to a select group of top companies globally. At its launch, Anthropic noted Mythos’ abilities to detect and generate exploits at a much faster rate than its competitors.

Concerned about bad actors, the company chose to restrict access, instead letting businesses boost their cyber defences using the tool. In the days since its launch, US, UK and Canadian leaders have already expressed their concerns.

The NCSC, in a public statement yesterday (13 April), said that Mythos appears to represent “a significant change in the way hardware and software vulnerabilities are identified”.

“Anthropic’s decision to restrict the release of the model and to work collaboratively with industry partners is a responsible approach,” it added.


AI is having an “inherently unpredictable” impact on cybersecurity, Browne told the Committee. He noted that AI is “genuinely revolutionary” and poses a “generational change” that is set to affect every other digital technology.

The question is no longer whether AI needs to be adopted, but rather how to do so safely, he added.

The primary use case of the technology has been as a “force multiplier”, Browne said. It allows users to increase the scale of their operations – effectively “democratis[ing]” access by removing technical and linguistic barriers. This allows relatively novice users to utilise commercial AI tools to deploy attacks.

Threat actors are already “heavy users” of AI tools, Browne said, while on the other side, security workers are also employing agentic AI to boost their defences.


The National Cyber Risk Assessment launched in December outlines how AI drives systemic risk by increasing the speed, scale and sophistication of cyberattacks.

“We’re in a race whether we choose to accept it or not,” Browne said. “The technical frontier is moving ahead week by week, and the role of managing cyber-related risks to society and to the economy is becoming far more dynamic.” AI should be viewed as a tool, a threat and a target, the director added.

The speed at which AI models are advancing also gives rise to an “AI gap”, leaving behind states that are unable to adapt. Security can no longer be an afterthought, no matter how promising an AI system might be, Browne said.


