Agustin Huerta discusses Anthropic’s new Code Review feature and the importance of AI governance.
As more organisations and professionals adopt technologies that make coding simpler, they may also introduce new dangers, as the speed at which code can now be generated can encourage poor security practices and risky behaviour.
In March, US AI research company Anthropic launched Code Review, a new feature designed to catch and eliminate bugs before they ever make it into a software’s codebase. The move, Globant’s senior vice-president of digital innovation, Agustin Huerta, explained, is reflective of a “shift in software development workflows as AI tools increasingly begin to own more of the software development lifecycle”.
He told SiliconRepublic.com, “It uses multiple specialised agents to review code for risks and bugs, cross-check amongst one another and prioritise the most relevant issues for reviewers.”
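The multi-agent pattern Huerta describes can be illustrated with a minimal sketch. This is a hypothetical toy, not Anthropic’s implementation: the agent names, the rule-based checks standing in for LLM calls and the severity scores are all invented for illustration. It shows the three steps he names: specialised agents review the code, their findings are cross-checked against one another, and the most relevant issues are prioritised for human reviewers.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str       # which reviewer agent raised the issue
    issue: str       # description of the risk or bug
    severity: int    # higher = more urgent

# Hypothetical specialised reviewers; a real system would call an LLM here.
def security_agent(code: str) -> list[Finding]:
    findings = []
    if "eval(" in code:
        findings.append(Finding("security", "use of eval() on input", 9))
    return findings

def bug_agent(code: str) -> list[Finding]:
    findings = []
    if "except:" in code:
        findings.append(Finding("bugs", "bare except swallows errors", 5))
    return findings

def cross_check(findings: list[Finding]) -> list[Finding]:
    # Drop duplicate issues flagged by more than one agent.
    seen, unique = set(), []
    for f in findings:
        if f.issue not in seen:
            seen.add(f.issue)
            unique.append(f)
    return unique

def review(code: str) -> list[Finding]:
    # Run all agents, cross-check their output, then surface the
    # highest-severity issues first for the human reviewer.
    findings = security_agent(code) + bug_agent(code)
    return sorted(cross_check(findings), key=lambda f: f.severity, reverse=True)
```

Running `review("try:\n    eval(user_input)\nexcept:\n    pass")` would surface the security finding ahead of the bug finding, mirroring the prioritisation step.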
But, he noted, while this helps teams manage higher volumes of code, it doesn’t replace human reviewers, and it raises a few concerns of its own when it comes to long-term security and best practice.
Critical coding concerns?
“The concern isn’t that code can write and review itself, but that organisations may assume less oversight is needed,” said Huerta. In reality, he explained, the same principles that govern traditional software development remain just as important when AI agents are involved, if not more so.
“The processes and workflow structures that once governed human coders should be adapted to govern agents, including workflow integration, human review, data readiness and observability. Teams need clear visibility into how code is generated, reviewed and promoted across environments, along with defined checkpoints to validate outputs.”
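The “defined checkpoints” Huerta mentions can be sketched as a simple promotion gate. This is a minimal illustrative example, assuming a hypothetical change record that carries provenance metadata (`generated_by`, `tests_passed`, `human_approved`); real teams would enforce the same policy through their CI system or branch protection rules.

```python
def can_promote(change: dict) -> bool:
    """Decide whether a change may move to the next environment."""
    # Automated checkpoint: outputs must be validated before promotion.
    if not change.get("tests_passed", False):
        return False
    # Human checkpoint: agent-generated code never skips human review.
    if change.get("generated_by") == "agent":
        return change.get("human_approved", False)
    return True
```

The point of the sketch is the second branch: automating the pipeline does not remove the human review step for agent-generated code, it makes that step an explicit, checkable condition.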
He said that though agents can carry out a range of tasks, such as assisting with, recommending and even executing prompts within a set of defined guidelines, code quality and risk management should remain the responsibility of people who themselves follow a clear process.
He finds that too many organisations now delegate tasks such as debugging and code writing to AI agents rather than to employees, amplifying the potential for risk. And it isn’t only AI hallucinations and errors that can slip past an automated workforce.
“A more significant concern is an overreliance and unchecked trust in agent autonomy. Overdependence on agent-driven work without the right checks and balances can create blind spots and amplify small issues into larger problems, such as system outages or security risks.
“For example, version control systems and code repositories are a way to maintain observability over human-written code, supported by structured review processes. When these workflows become automated without incorporating an additional layer of human oversight, organisations risk compounding mistakes and introducing larger structural issues that are harder to detect or resolve.”
While human involvement is irreplaceable, he finds organisational transparency equally important across the development lifecycle. “Organisations need visibility into how agents are accessing data, how they’re reasoning and why tasks are deemed complete. This level of observability is key in managing human-agent workflows, identifying areas for growth and maintaining accountability.”
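The observability Huerta describes amounts to keeping an auditable trail of agent activity. A minimal sketch, with invented event kinds (`data_access`, `reasoning`, `completion`) standing in for whatever telemetry a real platform would emit, might look like this:

```python
import datetime

class AgentAuditLog:
    """Record what an agent accessed, how it reasoned and why a task closed."""

    def __init__(self) -> None:
        self.events: list[dict] = []

    def record(self, agent: str, kind: str, detail: str) -> None:
        # Timestamped, append-only events support later accountability review.
        self.events.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent,
            "kind": kind,     # e.g. "data_access", "reasoning", "completion"
            "detail": detail,
        })

    def trail(self, agent: str) -> list[dict]:
        """Return the full activity trail for one agent."""
        return [e for e in self.events if e["agent"] == agent]
```

With such a log, a team can answer the questions Huerta poses: which data an agent touched, what reasoning it reported and on what grounds it marked a task complete.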
Moreover, when correctly implemented and supervised, AI agents bring clear and significant benefits.
Enterprising AI
AI agents undoubtedly bring a new element to the workplace, for better or for worse, but there are tangible benefits: they can boost productivity, take on laborious, data-heavy tasks, support developers in the coding process and identify issues or patterns that people often overlook.
Huerta said, “By taking on repetitive work that was previously handled by people, agents allow teams to focus on higher-value tasks and activities. These benefits are best realised when AI is used as an enhancement, not a replacement, for human judgment.
“The most successful models are a hybrid of human-agent teams, where the speed and scale of AI are combined with human oversight to refine and improve workflows, instead of just automating them.”
A key challenge going forward, he explained, will be balancing the adoption and implementation of AI agents with responsible use. As agents become more advanced and more capable, he said, organisations risk losing sight of basic best practices in crucial areas such as those that govern software development.
“Leaders must continue to prioritise observability, governance and human-agent collaboration despite pressures to prove ROI from AI systems.”