The global average cost of a data breach fell to USD 4.44 million in 2025, a 9 per cent drop and the first decline in five years, according to IBM’s Cost of a Data Breach Report. On the surface, that looks like progress. Security AI and automation are finally paying dividends, compressing detection timelines and trimming investigative overhead.
But the headline number obscures a more uncomfortable reality. Organisations with extensive automation reported breach costs nearly USD 1.9 million lower than those relying on manual processes. The gap between leaders and laggards is not closing – it is widening. And the very AI tools driving those savings are introducing a new category of risk that regulators, insurers and boards can no longer ignore.
The automation paradox
Security operations centres have embraced AI with the urgency of an industry running out of analysts. Burnout-driven churn rates exceed 25 per cent annually in many SOC teams, among the highest in IT. Replacing a trained analyst typically takes six to twelve months. The maths is brutal: organisations cannot hire their way to resilience.
Automation was supposed to solve this. And in narrow, well-defined workflows – alert triage, log correlation, repetitive enrichment tasks – it has. The Nextgen 2025/2026 Cybersecurity Trends Report estimates that industry telemetry in 2025 reached 308 petabytes across more than four million identities, endpoints and cloud assets, producing nearly 30 million investigative leads. Analysts confirmed only around 93,000 genuine threats from that mountain, a hit rate of just 0.3 per cent. Without automation, the volume alone would be unmanageable.
Yet Gartner’s 2025 Hype Cycle for Security Operations places AI SOC agents at the Peak of Inflated Expectations, warning that claims still outpace sustained, measurable improvement. Initial adoption frequently adds work before it reduces it. False positives and hallucinations remain genuine operational risks. And cost models often limit broad deployment across SOC roles.
The paradox is clear: organisations need AI to cope with the data flood, but ungoverned AI introduces the very blind spots it was meant to eliminate. IBM’s 2025 report found that shadow AI – staff using unsanctioned generative AI tools to process sensitive data – added an average of USD 670,000 to breach costs where present. A staggering 97 per cent of breached organisations that experienced an AI-related security incident lacked proper AI access controls. Meanwhile, 63 per cent of surveyed organisations admitted they have no AI governance policies in place at all.
The implication is stark. Automation without governance does not reduce risk, it redistributes it. And in a regulatory climate that increasingly demands transparency, ungoverned AI in the SOC is not just a technical liability. It is a compliance exposure.
When alert fatigue becomes a breach vector
The human cost is measurable, and it extends well beyond budget lines. Studies cited in the Nextgen report show SOC teams routinely ignore or dismiss up to 30 per cent of incoming alerts – not through negligence, but necessity. When every alert looks the same and context arrives fragmented across disconnected consoles, skilled analysts are forced to triage by instinct rather than evidence.
The consequences vary by sector, but the pattern repeats. In healthcare – still the costliest industry for breaches at USD 7.42 million per incident and 279 days to contain – alert fatigue is not merely an IT problem. ENISA’s dataset of 215 healthcare incidents between 2021 and 2023 found that 54 per cent involved ransomware, with patient data the primary target in 30 per cent of cases. Hospitals have reported diverted ambulances and delayed surgeries directly tied to stretched staff and clogged detection pipelines.
In manufacturing and energy, where NIS2 enforcement began in 2025, a single day of downtime at a high-throughput plant can cost millions of euros. Adversaries increasingly target industrial control systems by pivoting through poorly segmented IT networks, exploiting exactly the kind of ambiguous, context-dependent alerts that overwhelmed analysts tend to dismiss.
The financial data reinforces the point. Breaches contained in under 200 days averaged USD 3.87 million in 2025, while those stretching beyond that threshold averaged USD 5.01 million. Multi-environment incidents – spanning cloud, SaaS and on-premises infrastructure simultaneously – were costlier still, averaging USD 5.05 million with lifecycles approaching 276 days. The operating environment dictates complexity, and complexity dictates cost.
The lesson from 2025 is that sheer data volume will only increase, but the teams that succeed are those treating correlation and enrichment as architectural necessities rather than optional add-ons.
Europe’s regulatory convergence
Three regulatory frameworks are now converging on a single demand: prove resilience continuously, not just report it after the fact.
The Digital Operational Resilience Act (DORA), which came into force across the EU in January 2025, reframes cybersecurity for financial services around operational resilience during severe IT disruptions. Its reporting requirement is the most disruptive element – institutions must submit incident reports within hours, backed by forensic, audit-grade evidence. Logs must be digitally signed and time-stamped to survive regulator scrutiny months later.
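What “digitally signed and time-stamped” logging can mean in practice is easiest to see in miniature. The sketch below is a minimal, hypothetical illustration using an HMAC over a timestamped JSON event – not DORA-prescribed tooling, and a real deployment would use keys held in an HSM and a qualified timestamping service rather than a hard-coded key and the local clock.

```python
import hashlib
import hmac
import json
import time

# Hypothetical key for illustration only; production keys belong in an HSM.
SIGNING_KEY = b"replace-with-hsm-managed-key"

def sign_log_entry(event: dict, key: bytes = SIGNING_KEY) -> dict:
    """Attach a timestamp and an HMAC-SHA256 signature to a log event."""
    entry = dict(event, timestamp=time.time())
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return entry

def verify_log_entry(entry: dict, key: bytes = SIGNING_KEY) -> bool:
    """Recompute the HMAC over the entry minus its signature and compare."""
    claimed = entry.get("signature", "")
    body = {k: v for k, v in entry.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```

The point of the exercise is the verification path: months later, a regulator can confirm that an entry has not been altered since it was written, which is what makes the evidence audit-grade rather than merely archived.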
The NIS2 Directive, transposed into national law across Europe in 2024–2025, expanded the regulatory perimeter from seven sectors to eighteen essential and important sectors. In Romania, it was transposed as Law 124/2025, explicitly naming manufacturing as a regulated sector for the first time, forcing production facilities to adopt compliance frameworks on par with hospitals and banks. Under NIS2, boards of directors are directly accountable, with penalties including fines and disqualification from holding directorships in the EU.
And then there is the EU AI Act, whose most substantive obligations take effect on 2 August 2026. High-risk AI systems, a category that encompasses many security automation tools, will need to demonstrate compliance with requirements around risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness and cybersecurity. Providers must implement technical measures against data poisoning, model evasion and adversarial attacks.
For global financial groups, the complexity multiplies. A single breach may require simultaneous reporting under DORA, GDPR and national frameworks, each with different formats and deadlines. For manufacturers newly brought under NIS2’s scope, the challenge is even more fundamental: many lack the tooling infrastructure to produce compliance-grade evidence at all, let alone under time pressure.
Together, these three frameworks create a regulatory environment where cybersecurity AI cannot simply be effective – it must be auditable, explainable and governed. The question organisations face is no longer “how secure are we?” but “can we demonstrate it to regulators within hours?”. For organisations evaluating platforms built for this regulatory environment, a recent comparison of European SIEM vendors provides additional context.
The case for governed autonomy
This regulatory convergence is reshaping what good security architecture looks like. The industry is shifting from rule-based automation – where playbooks execute predetermined steps – toward what might be called governed autonomy: semi-autonomous SOC operations with built-in compliance guardrails.
In a governed autonomy model, AI does not replace human judgement. It narrows the decision space. Correlation happens at ingestion, collapsing dozens of fragmented alerts into a single enriched case with full audit evidence.
UEBA scoring ranks anomalous identities and assets by risk, so analysts focus on what matters rather than wading through noise. And every investigation timeline doubles as a compliance artefact, digitally signed, framework-mapped and ready for regulator export.
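The core idea behind UEBA scoring can be sketched very simply: measure how far an identity’s current activity deviates from its own historical baseline, then rank by deviation. The snippet below uses plain z-scores over daily event counts; it is an assumption-laden toy, and a production UEBA engine would draw on far richer features, peer-group baselines and learned models.

```python
from statistics import mean, stdev

def risk_scores(daily_events: dict[str, list[int]]) -> list[tuple[str, float]]:
    """Rank identities by deviation from their own activity baseline.

    daily_events maps each identity to its historical daily event counts,
    with today's count as the last element. Scores are simple z-scores.
    """
    scored = []
    for identity, counts in daily_events.items():
        baseline, today = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        z = (today - mu) / sigma if sigma else 0.0
        scored.append((identity, round(z, 2)))
    # Highest deviation first: this is the analyst's priority queue.
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

Even this crude version captures the operational value: an account that normally generates a dozen events and suddenly generates fifty rises to the top of the queue, while steady-state accounts sink out of view.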
The architectural principle is lean: every security case is simultaneously a compliance case. Analysts investigate once, and the system produces both operational outputs and regulator-ready reports. This avoids the duplication that plagues organisations running separate SIEM, SOAR and compliance tools, each adding cost, latency and integration effort.
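The “investigate once, report twice” principle described above can be illustrated with a minimal sketch: alerts are grouped by affected entity at ingestion, every step lands on a timeline, and the same case object yields both an analyst summary and an integrity-hashed export. All names here are hypothetical, and real platforms add signing, framework mappings and far richer enrichment.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Case:
    """A single enriched case that serves both analysts and auditors."""
    entity: str
    alerts: list[dict] = field(default_factory=list)
    timeline: list[str] = field(default_factory=list)

    def record(self, step: str) -> None:
        """Every investigative step becomes part of the audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.timeline.append(f"{stamp} {step}")

    def operational_summary(self) -> str:
        """What the analyst sees."""
        return f"{self.entity}: {len(self.alerts)} correlated alerts"

    def compliance_export(self) -> dict:
        """What the regulator receives: same data, plus an integrity hash."""
        body = {"entity": self.entity, "alerts": self.alerts,
                "timeline": self.timeline}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {**body, "integrity_hash": digest}

def correlate(alerts: list[dict]) -> list[Case]:
    """Group raw alerts by affected entity at ingestion time."""
    cases: dict[str, Case] = {}
    for alert in alerts:
        case = cases.setdefault(alert["entity"], Case(entity=alert["entity"]))
        case.alerts.append(alert)
        case.record(f"ingested alert {alert['id']}")
    return list(cases.values())
```

The design choice worth noting is that the compliance export is derived from the case, not assembled separately – which is exactly how the duplication between SIEM, SOAR and compliance tooling gets avoided.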
European platforms are increasingly built around this philosophy. Romania-based Nextgen Software, for example, designed its CYBERQUEST platform to unify detection, investigation and compliance reporting within a single workflow, so that every enriched case automatically generates the audit trail DORA and NIS2 demand. Its agentless OT monitoring module addresses a gap that matters for manufacturers and utilities: visibility into industrial control systems without deploying intrusive endpoint agents. Similar convergence efforts are visible across the European vendor landscape, from Nordic SIEM providers building compliance-ready exports to German-led initiatives embedding ISO 27001 and NIS2 mappings directly into detection logic.
From assistants to agents – carefully
The next frontier is the move from AI assistants to AI agents – systems that do not merely suggest next steps but actively execute detection, investigation and response workflows. It is a transition the industry is approaching with a mixture of ambition and caution.
Vlad Gladin, CTO of Nextgen Software, describes this evolution in practical terms: “Our Cyber Minds AI Personas are evolving from advisory assistants into context-aware investigation agents. Rather than simply recommending a response, these agents will be able to correlate telemetry across identity, network and endpoint data in real time, conduct preliminary forensic analysis, and present analysts with an enriched investigation narrative, not a queue of disconnected alerts. The goal is not to remove the analyst from the loop, but to ensure that when they engage, the context is already assembled.”
This mirrors the broader industry trajectory. Gartner recommends treating AI SOC agents as workflow augmentation tools rather than autonomous replacements, with strong emphasis on maintaining human oversight. The concern is legitimate: over-automation introduces risk if agents act on flawed assumptions, and most current use cases remain narrow and task-specific rather than end-to-end.
The governed approach means building trust incrementally. Start with automated enrichment and case assembly. Layer in UEBA-driven prioritisation. Only then extend to semi-autonomous response actions – and always with audit trails that a regulator or insurer can verify after the fact.
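That “always with audit trails” constraint is worth making concrete. A minimal, hypothetical guardrail might look like the following: only a pre-approved set of low-impact actions executes autonomously, everything else is escalated to a human, and every decision – executed or not – is appended to an audit log.

```python
# Hypothetical allow-list: the only actions the agent may run unattended.
AUTO_APPROVED = {"enrich_case", "quarantine_file", "block_hash"}

audit_log: list[dict] = []

def execute(action: str, target: str) -> str:
    """Run an action autonomously only if it is pre-approved; log either way."""
    autonomous = action in AUTO_APPROVED
    decision = "executed" if autonomous else "escalated_to_analyst"
    audit_log.append({"action": action, "target": target,
                      "decision": decision})
    return decision
```

Extending autonomy then becomes an explicit governance act – adding an action to the allow-list – rather than an implicit side effect of a model update, which is what makes the trail verifiable after the fact.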
There is a reason this incremental model resonates particularly in Europe. The continent’s regulatory landscape rewards demonstrable control over raw capability. An AI agent that can triage a thousand alerts per hour is impressive; an AI agent that can triage a thousand alerts per hour and produce a DORA-compliant incident timeline for each one is bankable. The commercial logic and the regulatory logic are converging on the same architectural requirements.
What 2026 demands
The organisations best positioned for 2026 are not necessarily those with the most advanced AI, but those that can prove their AI is trustworthy. In a landscape where DORA demands forensic evidence within hours, NIS2 holds boards personally liable, and the EU AI Act requires demonstrable governance of high-risk systems, the real differentiator is not speed of detection but speed of demonstrable trust.
This means compliance cannot remain a bolt-on exercise performed quarterly by a separate team. It must be embedded in the detection-to-resolution workflow, generated automatically as a by-product of incident handling. Platforms that deliver audit-ready evidence as a natural output of operations, rather than requiring analysts to reconstruct it after the fact, will set the new standard.
The cybersecurity industry spent the past decade racing to automate. In 2026, the race shifts to governing that automation, proving to regulators, insurers and boards that the machines defending the network are themselves accountable. The winners will not be the organisations with the most AI. They will be the ones whose AI can show its working.