
Embedding compliance in AI adoption

Kyndryl’s Ismail Amla discusses the company’s new policy as code process, and how it can help address AI issues such as agentic drift.

When it comes to AI adoption in enterprise, compliance concerns are becoming ever more important.

According to Kyndryl’s most recent Readiness Report, 31pc of enterprise customers cite regulatory or compliance concerns as a primary barrier limiting their organisation’s ability to scale recent technology investments.

2026 marks an important point on the AI compliance timeline in particular, with the EU’s AI Act transparency rules coming into effect in August.


Last month, Kyndryl announced its new ‘policy as code’ capability – a process designed to create policy-governed agentic AI workflows for enterprises.

“Policy as code is the process of translating an organisation’s rules, policies and compliance requirements into machine-readable code, so AI systems are restricted to only operating within pre-defined guardrails,” explains Ismail Amla, senior vice-president at Kyndryl Consult. “Human experts continue to oversee all activities related to these processes.”
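Kyndryl has not published implementation details, but the general idea Amla describes – translating rules and compliance requirements into machine-readable code that restricts what an AI system may do – can be sketched in Python. All policy names, limits and functions below are illustrative assumptions, not Kyndryl’s actual product:

```python
from dataclasses import dataclass

# Illustrative policies expressed as machine-readable data rather than prose.
POLICIES = [
    {"action": "transfer_funds", "max_amount": 10_000, "requires_human_approval": True},
    {"action": "send_email", "max_amount": None, "requires_human_approval": False},
]

@dataclass
class AgentAction:
    name: str
    amount: float = 0.0

def check_policy(action: AgentAction) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    for rule in POLICIES:
        if rule["action"] == action.name:
            if rule["max_amount"] is not None and action.amount > rule["max_amount"]:
                return False, f"{action.name} exceeds limit of {rule['max_amount']}"
            if rule["requires_human_approval"]:
                return False, f"{action.name} queued for human review"
            return True, "allowed"
    # Default-deny: any action not covered by a policy is blocked.
    return False, f"no policy covers {action.name}"

print(check_policy(AgentAction("send_email")))              # allowed
print(check_policy(AgentAction("transfer_funds", 50_000)))  # blocked: exceeds limit
```

The default-deny fall-through reflects the “pre-defined guardrails” Amla describes: an agent can only do what a policy explicitly permits, and the human-approval flag keeps experts in the loop.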

Compliant design

“Many organisations, especially those in complex, highly regulated environments, want to scale agentic AI, but are held back by concerns around security, compliance and control,” says Amla.

Speaking to SiliconRepublic.com, he says policy as code can help organisations support “consistent policy interpretations” and define clear operational boundaries, subsequently ensuring agent actions are explainable, reviewable and “aligned with organisational standards”.


Amla also says the framework can help reduce costs, accelerate decision-making, eliminate errors and “power AI-native workflows within defined policy guardrails”.

“By embedding policy and regulatory requirements directly into AI agent operations, policy as code can help organisations execute AI workflows that are governed, transparent, explainable and aligned to business requirements.”

But what about the long-term applications of policy as code?

Amla says the main benefit of the process is “trust through stronger governance, better transparency, lower operational risk and more reliable AI at scale”.


“Managing agentic workflow execution in this way supports controlled and responsible deployment of policy-constrained AI agents in sectors such as financial operations, public services, supply chains and other mission-critical domains, where reliability and predictability are essential,” he explains.

Catch the drift

Over the past year, according to Amla, the biggest change he’s noticed in AI adoption is that organisations are moving beyond proofs of concept and “focusing more seriously on what it takes to make AI work in production and at scale”.

“That means more attention on infrastructure, governance, data quality and organisational readiness,” he says. “Organisations are moving from experimentation to making more strategic decisions with the experience they have gained to drive higher value outcomes and performance for their organisation, and receive a return on their investment.”

But with increased focus on serious AI integrations comes risk, particularly if an organisation is not fully prepared.


Amla warns of ‘agentic drift’: an AI agent can appear reliable while gradually separating from its operator’s original intention or goal, and so end up working toward unwanted outcomes.

“Agentic drift creates pressing challenges for all organisations, but it is especially acute in the public sector and highly regulated sectors, such as banking and healthcare,” says Amla.

“In these industries, organisations cannot move from pilots to production if issues around control, trust and compliance remain unresolved. It’s clear enterprises urgently need a way to constrain what agents can do at runtime and close governance gaps long before drift leads to financial or compliance failures.”
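One way to read “constrain what agents can do at runtime” is a guard loop that validates every step an agent proposes against the operator’s declared scope before it executes, halting the run the moment a step falls outside it. The sketch below is a hypothetical illustration of that pattern; the tool names and `DriftError` type are invented for the example:

```python
# Steps the operator has declared in scope for this workflow (illustrative).
ALLOWED_TOOLS = {"search_invoices", "flag_duplicate", "draft_report"}

class DriftError(Exception):
    """Raised when an agent proposes a step outside its declared scope."""

def run_guarded(agent_steps, max_steps=20):
    """Execute agent-proposed steps only while they stay within scope."""
    executed = []
    for i, step in enumerate(agent_steps):
        if i >= max_steps:
            raise DriftError("step budget exhausted")
        if step not in ALLOWED_TOOLS:
            raise DriftError(f"out-of-scope step blocked: {step}")
        executed.append(step)  # in a real system, the tool call would run here
    return executed

# An in-scope run completes; a drifting one is halted before acting.
print(run_guarded(["search_invoices", "flag_duplicate"]))
```

The step budget and allow-list together bound both what the agent can do and for how long, which is the kind of runtime governance gap Amla argues must be closed before drift causes financial or compliance failures.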

Amla believes policy as code can help address this issue because it lets businesses translate their rules and policies into machine-readable instructions that “govern how AI agents reason, adapt and act”.


“This greatly reduces the risk of agentic drift,” he says. “It also alleviates the trust and compliance concerns standing between large enterprises and a return on their AI investments.”
