In just a few years, generative artificial intelligence (gen AI) has brought about significant changes in many industries, from healthcare to education, entertainment to finance, and even law.
The use of gen AI in court verdicts poses significant risks to justice. Erroneous outcomes generated from “hallucinated” information, discriminatory decisions and lack of transparency are all concerns when this technology is introduced to courtrooms.
But already a number of judges around the world have used it in decision-making and judgment writing. This is why some jurisdictions, including the UK, have issued guidelines for judges regarding AI use.
Broadly, the guidelines suggest judges might use AI as a tool to conduct preparatory work, such as drafting summaries of long documents, translating legal documents, identifying legal precedents or enhancing the readability of documents. They recommend against its application for core judicial functions, including decision-making.
Recently, some senior judicial leaders have opined that AI might be used to decide “low-stakes” or less-complex cases with adequate precautions, such as keeping a human judge in the loop.
In a November 2024 speech, the UK’s second most senior judge, Geoffrey Vos, spoke of a “spectrum” of legal decisions that AI might soon make, or help make.
Vos said the use of AI for “broadly mechanical decisions, like those about the amount of a pension or benefits, or the calculation of personal injury damages and loss of earnings” would likely save money and time. But he called for discussion on whether such use would violate essential human rights.
A year later, Vos again called for “serious debate” about what rights humans should have protected in this context. And he urged that AI be “used responsibly, effectively and safely in legal systems and processes”.
A number of jurisdictions are testing or using AI in such “mechanical” cases already. Estonia uses a semi-automated small-claims system in civil proceedings for monetary claims up to €7,000 (£6,100), with human clerks overseeing the process.
Frankfurt District Court in Germany has tested an AI system named Frauke to deal with air passenger rights lawsuits. Frauke analyses earlier cases and rulings to create pre-configured draft judgments. Judges assemble final verdicts from these texts following their ruling, significantly reducing the time spent drafting.
Taiwan piloted an AI-powered tool to assist courts by producing ruling notices for driving under the influence (DUI) cases, or aiding and abetting in fraud cases. The AI system generates a complete draft ruling including the facts, legal reasoning, citations and final verdict. The judge reviews this draft and, upon approval, can issue it as the official judgment, with or without modifications.
It is evident from these examples that the key motivation for replacing human judges in certain categories of cases is efficiency. As a result, a few other jurisdictions are also exploring whether gen AI could be used to adjudicate certain litigation without human judges at all.
The cost of using gen AI as judge
Courts are overburdened, and technology like gen AI promises consistency and efficiency. But it would mark a significant change of centuries-old practice. And it risks undermining what some legal scholars argue is a fundamental principle of justice: the right to be judged by a human being.
Court adjudication is not only about reaching a decision. It is about a holistic and fair process that includes the right to be heard – presenting defence, weighing competing narratives, and exercising judgment in light of law and equity.
Algorithmic tools, no matter how advanced, do not hear or “understand” even their own output, let alone human values or changing social contexts. Gen AI cannot recognise suffering, credibility, remorse or vulnerability like a human. That alone makes it unfit to sit in a judge’s seat.
Categorising cases as simple or complex may look pragmatic, but it is both legally and morally dangerous. What counts as a “simple, routine or mechanical” case is itself a human decision. Legal disputes over compensation or benefits may appear straightforward on paper, yet carry significant consequences for the person bringing the case.
Allocating such cases as appropriate for algorithmic adjudication risks creating a two-tier justice system – in which one group of citizens gets to present their case before a human judge, while others are handled by machines. Only the former, I would argue, are exercising their right to a fair hearing and trial before an independent and impartial tribunal, as protected under Article 6 of the European Convention on Human Rights.
Additionally, the efficiency argument may become illusory. Algorithmic systems like gen AI require continuous human oversight, auditing and rectification. Hallucination or mistakes, whether from flawed design or biased training data, can completely negate the claimed benefits.
Public trust matters in all legal systems. If people lose trust in automated decisions, appeals will increase – adding to the existing backlog of cases.
Emerging technology such as gen AI may be suitable for managing court administration and reducing clerical burdens. But substituting human judges, even in supposedly low-stakes cases, undermines basic principles of justice. Efficiency should not come at the expense of the values the justice system exists to protect.