BearingPoint’s Barry Haycock and Rosie Bowser discuss the evolution of workplace AI and the importance of governance in 2026.
AI in the workplace is becoming increasingly common.
Last September, Ibec, the group representing Irish businesses, released a report indicating a jump in AI usage among Irish workers: in July 2025, 40pc of employees reported using AI in the workplace, up from just 19pc in August 2024.
Barry Haycock, senior manager of data analytics and AI at BearingPoint, believes workplace AI has moved from “experimentation to operational use”.
“Copilots and agents are becoming standard, but we’re also seeing automation of complex knowledge work like contract review, compliance checks, large-scale document processing, advanced search across enterprise data,” he tells SiliconRepublic.com.
“For larger-scale work, we’re seeing ‘AI factories’ being implemented as enterprises are seeking to automate AI pipelines. Augmented analytics is allowing business teams to surface insights without deep technical expertise.”
However, Haycock says “sustainable value” in relation to the tech still depends on governance, data maturity and workforce capability.
“Without governance and measurable outcomes, pilots stall,” he explains. “AI should be integrated incrementally and aligned directly to business needs. Organisations need defined use cases, strong data foundations, clear risk ownership and executive sponsorship.
“Data governance and model explainability are increasingly being understood as enablers. Security, regulatory exposure and explainability must be addressed early.”
Rosie Bowser, a consultant in data analytics and AI at BearingPoint, says they’ve seen a “temptation” for organisations to rush into implementing new AI solutions – whereas the “greatest value creation” occurs when the solution is anchored in a clearly defined problem or workflow.
“Starting with the tool is not unlike painting over a structural crack: it may look like progress, but it doesn’t resolve the underlying issue. So, as an organisation, you need to be as ready as the technology is, and that may well involve having to acknowledge and rectify organisational immaturity before rolling out a new AI solution.”
Accessory, not autonomous
Concerns about AI replacing jobs have been prevalent ever since workplace AI emerged as a topic. The worry is understandable, especially in the wake of recent AI-related layoffs.
Haycock believes AI is more likely to “reshape” work, rather than eliminate it outright.
“The real risk is failing to reskill and adapt,” he says. “It will automate anything that can be automated, particularly repetitive cognitive tasks. Organisations that invest in workforce capability and reposition people toward higher-value work will benefit most.”
Bowser agrees, asserting that the real risk is “stagnation” rather than replacement. “Organisations that don’t actively support upskilling may find their workforce unable to operate safely and confidently within AI‑enabled processes,” she says.
Bowser adds that companies should consider AI as a workflow accelerator, “rather than an autonomous decision-maker”.
“The AI system should be able to take on the repetitive, rules-based components of work, but we still need humans to retain oversight and make the final decisions,” she explains. “The importance of ownership here isn’t a backlog consideration either; with the AI Act’s emphasis on traceability and model provenance, this will be critical moving forward.”
Governance in advance
Haycock says that in 2026, AI governance will be less about pilots and “more about proof”.
“With the EU AI Act taking effect and Ireland’s National Digital and AI Strategy 2030 setting clear expectations for responsible adoption, organisations will need to demonstrate documentation, transparency and auditability,” he says.
“I believe customer expectations will increase, and companies will need to meet that demand. Furthermore, oversight must be proportionate to risk and embedded into operations. The differentiator will be scalable governance that enables innovation while standing up to regulatory and public scrutiny.”
Bowser says that governance needs to “feel practical and tangible”, with measures such as clear rules about data handling, audit trails and fallback steps, and knowing what the model is actually doing. The key, she says, is making governance practical enough that people can follow it “without friction”.
“If you were starting your AI journey in 2026,” says Bowser, “a learning for me is that there is often documentation developed in most organisations already, but do people on the ground know where that documentation is? Do they know who the data owners are, do they know what they can do safely?
“Organisations need to be aware of how people have adopted AI in their daily lives and how they expect to be able to bring it into their work lives, otherwise you end up with AI shadow practices that could introduce significant risk. Now that the EU AI Act is in force, these risks could be considerable.”