Politics
AI policing: making racism more tech-laden
A police chief in charge of AI use has admitted that a new £115m national police data centre will produce biased and racist results. However, he also claims that police will try to mitigate that risk.
Which is all fine then, given that the police have done such a good job of combating their own racism so far.
Alex Murray – the National Crime Agency’s threat leadership director, and the national lead for AI – said:
Once you’ve recognised and minimised [bias], how do you train officers to deal with outputs to ensure that it is further minimised?
If you talk about live facial recognition or predictive policing, there will be bias, and you need to get in the data scientists and the data engineers to clean the data, to train the model appropriately, and then to test it.
There is no point releasing something to policing that has bias in it that’s not recognised, and everything should be done to minimise it to a level where it can be understood and mitigated.
AI use: ‘lack of meaningful oversight’
Labour have recently called for a massive expansion in the use of AI in policing. This has already been criticised in an early day motion tabled in parliament, with signatories including Labour MP Jon Trickett. The motion voices alarm at the creation of:
a surveillance framework resembling a panopticon, with asymmetric state power exercised over the population without adequate statutory safeguards or democratic consent; recognises widespread concern from civil liberties organisations regarding misidentification, algorithmic bias and lack of meaningful oversight; and calls on the Government to halt the rollout of live facial recognition and AI policing technologies
Part of the government’s initiative includes building a new national AI data centre, which will cost the public an eye-watering £115m. The centralised data centre would replace the current system, in which individual forces make their own decisions on AI in policing. Critics argue that this framework is slow and wasteful.
AI racism: they know, they’re doing it anyway
Alex Murray claimed that the centre would try to reduce bias, and to determine which private suppliers’ products work best. However, the police already have form for failing to reduce or act on bias in AI policing. On the failure of a previous police venture in facial recognition technology, the Association of Police and Crime Commissioners (APCC) stated that:
System failures have been known for some time, yet these were not shared with those communities affected, nor with leading sector stakeholders.
Murray also argued that a human police officer will always have to make any final decision on what to do with an AI tool’s results.
This will, of course, come as absolutely no reassurance to anybody aware that UK police forces have known about their own systemic racism and bias for decades and have completely failed to address it.
Reproducing human bias
On the use of ‘machine learning’ in policing, I previously wrote that AI decision-making is sometimes perceived as unbiased and emotionless. However, this couldn’t be further from the truth. Rather, it simply hides the – very human – biases in its training dataset behind a veneer of cold ‘fairness’.
In her report on AI biases in policing, the UN’s Ashwini K.P. – special rapporteur on contemporary forms of racism – also called out predictive policing. Back in 2024, Ashwini explained that:
Predictive policing can exacerbate the historical over policing of communities along racial and ethnic lines. Because law enforcement officials have historically focused their attention on such neighbourhoods, members of communities in those neighbourhoods are overrepresented in police records. This, in turn, has an impact on where algorithms predict that future crime will occur, leading to increased police deployment in the areas in question. […]
When officers in overpoliced neighbourhoods record new offences, a feedback loop is created, whereby the algorithm generates increasingly biased predictions targeting these neighbourhoods. In short, bias from the past leads to bias in the future.
Similarly, AI algorithms have shown deep racial bias in their ability to recognise human faces. A study by the National Physical Laboratory (NPL) on the use of facial recognition technology with the police national database turned up more incorrect matches for Black and Asian people than for white people.
Darryl Preston – the APCC’s forensic science lead – said:
The discovery of an in-built bias in the police national database’s retrospective facial recognition system, even if only in limited circumstances, demonstrates the need for independent oversight of these powerful tools.
It is not acceptable for technology to be used unless and until it has been thoroughly tested to eliminate bias. That clearly was not the case in this instance.
Both Labour and the police themselves know – and have been reminded repeatedly – that AI policing produces discriminatory results. The problem is that they just don’t care.
They’ll continue pressing forward, with weak promises to ‘mitigate’ the bias. Then, when AI policing inevitably reproduces human bigotry against real people, we’ll get the same playbook we’ve seen before: a decade-late half-apology, sworn promises to reform and do better, and then more of the same old bigotry.
Featured image via the Canary