BearingPoint’s Karl Byrne, Holly Daly and Fiona Eguare discuss the effects of AI on software engineering and how it has affected graduates in particular.
The widespread integration of advanced AI technology into tech workplaces across the world has transformed working life for many, but especially so for software teams.
“Over the past few years, software engineering has undergone some of the most significant changes I’ve seen in my career,” says Karl Byrne, director and head of software development at BearingPoint Ireland.
“While the industry has navigated the transition to cloud native and DevSecOps, the arrival of generative AI represents a fundamental change in how we conceive, build and secure software.”
Byrne tells SiliconRepublic.com that what strikes him the most is how broad the change is. “It’s not confined to one specialism or team – it’s touching every part of how we deliver software.”
However, he adds that the fundamentals of the area haven’t changed, emphasising that strong technical understanding, sound design principles, and a focus on security and quality “remain as important as ever”.
“If anything, AI has raised the bar, because engineers now need to critically evaluate AI-generated work on top of everything else they do,” he explains.
For graduates, Byrne says, the introduction of AI has spurred a “total evolution” of their day-to-day roles.
Responsible use
Holly Daly, a technology analyst at BearingPoint Ireland, says the growing use of AI highlights the importance of using these tools carefully and responsibly – especially for graduates and early-career software engineers.
“While AI can significantly enhance productivity, graduates should avoid becoming overly dependent on it and continue to build on the foundational skills they have developed,” she says. “AI should be used as a supporting tool to improve efficiency and quality rather than becoming a replacement for your own technical understanding and critical thinking.”
She explains that it’s particularly important for a graduate to demonstrate that they understand the solutions they’re delivering and aren’t just reliant on AI.
“From my own experience as a graduate working on an AI-driven project, I’ve had the opportunity to work with several AI tools, testing and recommending them,” she says. “At the same time, I’ve placed a focus on learnings to improve my skillset so that I do not become reliant on AI. This approach has allowed me to benefit from AI, while allowing me to work confidently on my own.”
Daly says that BearingPoint’s graduate programme adapted to AI‑assisted engineering by exposing graduates to AI from the outset and integrating it into both their training and project experiences.
“During onboarding, graduates are given exposure to AI through dedicated talks and interactive sessions, including AI walkthroughs that highlight its capabilities, limitations, and potential use cases. These sessions help build an initial understanding of how AI can support both technical and non‑technical tasks, while reinforcing the importance of responsible usage.”
Fiona Eguare, also a technology analyst at BearingPoint Ireland, says the process of onboarding AI tech into an engineering team has multiple steps – beginning with research and testing.
“We explored the tools available and trialled those that seemed best suited to our needs. This allowed us to compare them, confirm that they fit our use cases, and evaluate the benefits they offered over more traditional tools and methods,” she says.
“Once the most useful tools were identified, we shared our findings across the team and wider company, and we integrated the tools into the project where appropriate.”
Eguare says that while everyone involved was enthusiastic and open to incorporating AI throughout the software development life cycle, it’s very much “an ongoing effort”.
“As the tools continue to develop, it will be essential to keep upskilling and monitoring their security, to ensure that they remain the right fit for us.”
AI-driven changes
Both Daly and Eguare say the inclusion of AI tools in their working life has had some benefits.
“One of the clearest effects for me,” says Eguare, “has been the increase in developer efficiency. With the help of generative AI tools, some of the more tedious and time-consuming development tasks can be completed much more quickly.
“These tools can also be a great help when debugging. While they can sometimes miss the mark on this, some generative AI tools do an excellent job of understanding the context of the project and codebase, making them great at pinpointing the source of bugs.”
Daly has found that tasks such as writing new code, refactoring existing code and debugging errors have become “much faster and more efficient” with the support of AI tools.
As well as the benefits, both also recognise the potential pitfalls of the technology.
Eguare highlights the cybersecurity risks of the tech, saying AI has made it easier for attackers to exploit vulnerabilities, while Daly says AI has changed the requirements of the role.
“The role is no longer just about writing code, but also about reviewing, validating, and improving AI‑generated work,” says Daly. “Software engineers need to be more intuitive and analytical when assessing whether AI‑suggested code is correct, secure, maintainable, and suitable for the problem being solved. As a result, strong technical understanding and critical thinking are more important than ever.
“Overall, while AI can be an effective productivity booster, it is important that software engineers do not let it take over, as responsibility still lies with them to ensure the final solution meets the required standards.”
Human oversight
What’s remained consistently important in using generative AI tools in software engineering, according to Eguare, is human oversight.
“When working as a team on projects of larger scale and significance, oversight is essential; its importance really can’t be overstated,” she says.
“A lack of oversight can lead to issues, like bloated code or serious vulnerabilities slipping through to production.”
Eguare explains that in order to tackle these issues, it is important to use “high-quality prompts, specifying expectations around quality and security”, as well as testing.
“Alongside traditional testing, tools that specifically address common issues with AI-generated code can be particularly helpful here,” she says. “We also rely on CI/CD pipelines with automated quality and security scanners to enforce consistent standards and catch issues early – especially important when AI accelerates code changes.”
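The article doesn’t name the specific scanners BearingPoint wires into its pipelines, but the kind of automated check Eguare describes can be illustrated with a minimal, hypothetical example: a small script that statically scans Python source for patterns often flagged in reviews of AI-generated code, which a CI job could run and fail on. The `scan` function and the patterns it looks for are illustrative assumptions, not any real tool’s behaviour.

```python
import ast

# Patterns a quality gate might flag in AI-generated code (illustrative only).
RISKY_CALLS = {"eval", "exec"}

def scan(source: str) -> list[str]:
    """Return sorted warnings for common risky patterns in Python source."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Direct calls to eval()/exec() are a common injection risk.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                warnings.append(f"line {node.lineno}: call to {node.func.id}()")
        # A bare 'except:' silently swallows every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            warnings.append(f"line {node.lineno}: bare except clause")
    return sorted(warnings)

# Example: code that would be parsed (never executed) by the check.
snippet = "try:\n    eval(user_input)\nexcept:\n    pass\n"
for warning in scan(snippet):
    print(warning)
```

In a real pipeline this would sit alongside established linters and security scanners rather than replace them; the point is that the gate runs automatically on every change, which matters most when AI accelerates the rate of code changes.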
Another issue she highlights is that if too much of a program is generated without human oversight, it can become “quite difficult” for a developer to debug or understand the codebase.
“While AI can also help with this, staying familiar with the structure of the program can help to ensure that the code remains clean, secure, and high quality as changes are made.”