Predictive policing from any government will be a disaster
On 12 February, the Ministry of Justice announced plans to use predictive policing to overhaul the youth justice system. Tucked away in the 25-page document was a proposal to use “machine learning and advanced analytics” to “support early, appropriate intervention” in youth crime.
Whilst the white paper was vague on the particulars, only promising further news in the spring, a Times article went into greater detail on the plans. Beneath an inflammatory headline promising machines that would predict “the criminals of the future”, the column explained that:
Artificial intelligence (AI) could be used to predict the criminals of the future under government plans to identify children who need targeted interventions to stop them falling into a life of crime. […]
Academic research has found patterns can emerge from data collected by health visitors checking on newborn babies, although it has not been decided whether the government programme would go back so far to determine whether someone was at risk.
Now, it would be easy here to point out that this pre-crime policing is horrifyingly dystopian. It sounds like a crude mashup of phrenological skull-measuring and Minority Report.
And that’s true, it is horrifyingly dystopian. But it’s also a present reality that racialised individuals in the UK have been subject to for decades.
Predictive policing and ‘criminals of the future’
Regarding the AI plans, a government source stated that:
We are looking at how we can better use AI and machine learning to essentially predict the criminals of the future, but to do so ethically and morally. It’s about ensuring the data from the NHS, social services, police, Department for Work and Pensions and education is used effectively, and then using AI so you can go above and beyond what we can currently do.
This is going to be pretty transformative on how we put money and resources into prevention. We keep getting the same profiles of criminals in the justice system but we’re intervening far too late.
This isn’t about criminalising people but making sure the alarms in the system are better understood and data and AI modelling can do that much better.
Minister for youth justice Jake Richards explained further:
I’m determined to harness the power of artificial intelligence and machine learning to gain better insights into the root causes of crime. This will allow us to focus on the earliest of interventions for individuals and families, offering better outcomes for children and keeping our communities safer.
But we must hold and use this personal data carefully, and that’s why I’ve commissioned this specialist expert committee to look at the efficacy of this work, but also the ethical and legal consequences.
The Times goes on to claim that the data show neurodivergent, poor, and ethnic minority kids are more likely to commit crimes. Four in every five children in youth detention are neurodivergent, and 33% of kids with a care background receive a police caution before they even turn 18.
The article states all of this in that neutral tone that only the discerning bigot’s newspaper of choice can manage. And, of course, it’s a deeply misleading abuse of the truth.
Biases past and biases future
In reality, these marginalised kids are the ones who are more likely to be picked up by police, cautioned, or prosecuted. Police profile their arrestees – they have a (racist, discriminatory) idea of who a criminal is, and then police people accordingly. And surprise surprise, the people treated as criminals keep getting arrested.
That’s a world away from being “more likely to commit crime”.
Whilst AI decision-making is sometimes perceived as unbiased and emotionless, this couldn’t be further from the truth. Rather, it simply hides the – very human – biases in its training dataset behind a veneer of cold ‘fairness’.
In her report on AI biases in policing, the UN’s Ashwini K.P. – special rapporteur on contemporary forms of racism – specifically called out predictive policing. Back in 2024, Ashwini explained that:
Predictive policing can exacerbate the historical over policing of communities along racial and ethnic lines. Because law enforcement officials have historically focused their attention on such neighbourhoods, members of communities in those neighbourhoods are overrepresented in police records. This, in turn, has an impact on where algorithms predict that future crime will occur, leading to increased police deployment in the areas in question. […]
When officers in overpoliced neighbourhoods record new offences, a feedback loop is created, whereby the algorithm generates increasingly biased predictions targeting these neighbourhoods. In short, bias from the past leads to bias in the future.
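To make that feedback loop concrete, here is a minimal toy sketch in Python. It does not model any real policing system – the starting records, detection rate, and 80/20 patrol split are all illustrative assumptions. Two areas have identical real offence rates, but the area with more historical records keeps attracting more patrols, and so keeps generating more records:

```python
# Toy illustration of the feedback loop described above: two areas with
# identical real offence rates, but area A starts with more recorded crime
# because it has historically been over-policed. Each round, the 'predictive'
# model sends the bulk of patrols to whichever area has the most records;
# more patrols mean more offences get recorded there, so the prediction
# looks ever more correct. All numbers are illustrative assumptions.

TRUE_OFFENCES = 100      # identical real offences per round in both areas
DETECTION_RATE = 0.05    # share of real offences recorded per patrol unit
TOTAL_PATROLS = 10       # fixed patrol budget

records = {"A": 60.0, "B": 40.0}   # starting point: A is already over-policed

for round_no in range(1, 6):
    # The model ranks areas by past records and weights deployment 80/20
    ranked = sorted(records, key=records.get, reverse=True)
    patrols = {ranked[0]: 0.8 * TOTAL_PATROLS, ranked[1]: 0.2 * TOTAL_PATROLS}
    for area in records:
        records[area] += TRUE_OFFENCES * DETECTION_RATE * patrols[area]
    share = records["A"] / sum(records.values())
    print(f"round {round_no}: area A holds {share:.0%} of recorded crime")
```

Within a few rounds, area A’s share of recorded crime climbs from 60% towards the 80% deployment split, even though nothing about either area’s actual offending has changed.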
Pre-crime criminalisation
However, as I mentioned earlier, this feedback loop isn’t a problem specific to AI itself. Rather, it’s inherent to the very idea of pre-crime policing – and it’s an oppression that racialised individuals in the UK have been dealing with for decades.
Take, for example, the Met Police’s ‘Operation Trident’ of the 1990s. This sought to prevent gang-related violence in London, and instead resulted in the mass racial profiling of Black youth. An Amnesty International report on Trident’s ‘Gangs Matrix’ database stated that:
The type of data collection that underpins the Gangs Matrix focuses law enforcement efforts disproportionately on black boys and young men. It erodes their right to privacy based on what may be nothing more than their associates in the area they grow up and how they express their subculture in music videos and social media posts. Officials in borough Gangs Units monitor the social media pages and online interactions of people they consider to be ‘at risk’ of gang involvement, interfering with the privacy of a much larger group of people than those involved in any kind of wrongdoing.
Later, in 2003, the UK government created the Prevent counter-terrorism strategy. Ostensibly, it seeks to prevent people from being radicalised into extremist ideologies. In reality, it disproportionately targets Muslims – including Muslim children – for surveillance and hostile treatment as a dangerous ‘other’.
Then, in 2023, the Shawcross Review of Prevent baselessly claimed that the strategy should target Muslims to an even greater degree, rather than far-right extremism. In itself, this was a perfect microcosm of confirmation bias in action. At the time, the Canary’s Maryam Jameela wrote that:
Pre-crime strategies like Prevent presume full agency and power at all times, for all Muslims. In order for such a thing to happen, there needs to be a cultural belief that Muslims are figures of suspicion because they always hold the potential to be terrorists. Underpinning this presumption is that Islam itself harbours something sinister. Repeated governments have, over the years, created a culture of criminalisation that only views Muslims as being in a constant state of pre-crime.
Now, for all Jake Richards’ protestations that his AI plans will use data ethically to create better outcomes for children, it certainly sounds like more of the same discriminatory dross. We’ve seen already what these people’s ethics and care look like.
There is no way to predict criminality that isn’t driven by our previous biases – machine learning or not. All that this ‘new’ strategy can do is push yet more marginalised youth into the no-man’s-land of pre-criminality. And all the while, vulnerable kids will be shown directly that their every move was always already under scrutiny.
Featured image via the Canary