The House Article | Children must not be collateral damage in the race to AI
AI has the potential to enrich children’s lives, helping them learn in new ways, express their creativity and connect with others across the world.
But on the other side of the ledger, it is already exposing them to dangerous risks and serious harms that are evolving faster than our ability to fully understand them.
That’s why the new measures in the Crime and Policing Act that passed in Parliament last month are so significant – both for what they do and don’t deliver for young people. Making it illegal to possess, create or distribute AI tools designed to generate child sexual abuse material is a vital step forward. Likewise, it’s positive to see government tackling AI ‘manuals’, which instruct offenders on how to use this technology to exploit and abuse children.
This is progress. But it is nowhere near enough. Because AI is not only dangerous when it crosses into criminality – it is also reshaping children’s everyday online lives in ways that are less visible but equally harmful.
We are seeing AI amplify damaging content, distort self-image and trap young people in echo chambers. And online abuse has become more scalable and more personal, with AI-generated harassment, impersonation and manipulated images making harm feel more intense and harder to escape.
We hear about this directly when children reach out to our Childline service. One 17-year-old girl said she uses AI to count her calories, to ensure she “stays in a certain bracket”.
And sometimes the harm comes from AI chatbots that simply don’t understand the reality of a child’s life. One boy, aged 16, told Childline: “You have to walk on eggshells around my dad or he’ll snap. Usually, it’s shouting and kicking me. I’m actually scared when I know he’s picking me up from school. I asked AI for advice, and it said, don’t provoke him, ignore it, don’t react – if I stay calm, dad will stay calm. I don’t think I provoke him, sometimes it’s just because he’s got me alone and knows no one else will find out.”
This response is dangerous and wrong. No child should ever be told to manage or minimise an adult’s abusive behaviour. An AI chatbot should be directing a child in distress towards safe, confidential support, to services like Childline, not telling them to stay calm so their abuser stays calm.
The government deserves credit for tackling the most extreme abuses. But if we want the UK to be the safest place in the world to grow up online, we cannot regulate AI only at the point where it becomes criminal.
For years, the tech giants driving this revolution have behaved as though children’s safety is someone else’s problem. They push out powerful AI systems at breakneck speed, fully aware of the risks, and then hide behind disclaimers when those risks materialise.
These companies have the resources, the expertise and the foresight to build safeguards in from the start, yet time and again they choose not to. It is indefensible that some of the wealthiest corporations on the planet continue to treat children as acceptable collateral in their race to dominate the AI market. That negligence should no longer be tolerated.
Last month, Liz Kendall, the Secretary of State for Science, Innovation and Technology, met with our Voice of Online Youth at Wilton Park. They spoke candidly about what AI means for their friendships, mental health and overall safety. AI is already influencing their lives, and they want a say in how it is governed.
The Crime and Policing Act is an important start. Now the government must match that ambition with a regulatory framework that protects children from the full spectrum of AI-driven harm and ensures these services are safe and appropriate for the young people using them.
Because AI will define the next generation’s childhood, and it is our responsibility to ensure it does not also become a tool to exploit their vulnerabilities.
Chris Sherwood is chief executive of the NSPCC