The Risks Of Schools Sharing Pupil Photos Online
Imagine a photograph of your child taken on sports day. They’re laughing, probably slightly out of breath, wearing their school kit.
It’s the kind of image that ends up in the school newsletter, on the website, shared with pride by staff who want to show what school life looks like.
Now imagine that same photograph being found by a criminal who lifts the face of the child in seconds and, using freely available AI tools, turns it into something so harmful I am not going to describe it in detail here.
That image is then sent to the school with a demand: pay up, or it goes online.
This is not a hypothetical. The scale of child abuse imagery has grown from fewer than 10,000 images 25 years ago to tens of millions today.
This has happened to schools in the UK, and most schools have no idea it is possible.
I know that’s uncomfortable to read. But as the mother of two teenage daughters, I strongly believe that all parents deserve to know that the internet where their children’s photographs are being uploaded is not the same internet that schools developed their safeguarding policies for.
I didn’t come to this issue as a parent whose child was affected. I came to it as someone who has run branding agencies for the last two decades, sitting in a meeting with a school I had been working with on a rebrand – a school I knew well, whose team I respected.
It was during that work it came to light that a number of the school’s pupil photographs had been stolen, turned into deepfake abuse material, and that the school had been sent a ransom demand.
I sat there and listened to what had happened to those children. And my first thought – before anything to do with technology, platforms, or solutions – was simple: I never want this to happen to my daughters.
I have spent my entire career in branding, working with businesses, charities and organisations of all sizes to tell their stories through imagery. I understand better than most what those photographs mean and why they matter, not least for schools.
The school newsletter, sports day, the nativity play – these are not trivial things, they are how schools communicate joy, build community and celebrate the children in their care.
Schools should not have to stop celebrating their pupils or sharing moments with their communities, but they do need tools designed for the internet those images now live in.
The consent form most parents sign at the start of each school year was written for a different world. It was designed to address whether your child’s image could be used – shared in a newsletter, published on a website, posted on social media.
It was not written to address what happens once that image is publicly accessible online. Because when those consent forms were first written, what is now possible simply wasn’t.
AI tools that can take a child’s face from a school website and generate abusive content from it are not hypothetical. They are freely available, require no technical expertise, and the safeguarding gap they have created is one that almost no school in the country has a policy to address.
New research that we commissioned found that while 85% of UK teachers are aware that criminals are using AI to target school photographs, fewer than one in three schools have any AI- or deepfake-specific policies in place, and nearly a quarter of teachers said their school has already been targeted.
This isn’t about stopping schools from sharing images. It’s about understanding what those images are exposed to once they’re online.
Before signing that consent form, parents should be asking their child’s school far more questions: what happens to images once they are online, whether the photography policy has been updated to reflect the risks of generative AI, and what protection is in place for pupil images shared on public-facing channels.
These are not unreasonable questions. They are simply the ones that every parent of a school-age child should now be asking, and that every school should be ready to answer.
But let’s be clear, schools are not to blame for this.
They shared those photographs in good faith, as they always have. It’s just that the world those images are being shared into has changed, and the frameworks most schools rely on have not yet caught up.
Consent forms, GDPR policies, online safety training: none of these protect a child from a criminal who takes their image without asking. These criminals don’t need permission. They take images directly from school websites and social media without ever making contact.
What’s needed now isn’t less sharing, but safer sharing. That’s the problem I set out to solve when I built Aidos – a safeguarding platform that makes every pupil in a school photograph permanently unidentifiable before the image is shared online.
Not blurring, not pixelation, but a full replacement of every child’s face with a realistic AI-generated substitute, so that the image can never be traced back to a real child.
Schools can keep sharing everything they have always shared. The difference is that those images can no longer be used to harm the children within them.
Protecting children’s digital identities is becoming one of the defining safeguarding challenges of the AI era. Schools shouldn’t have to face it alone, but as parents we have a role too, and it starts with asking the question.
So, before you sign that consent form this September, ask your school what they have in place. They may not yet have the answer, but the fact that you’re asking means they’ll need to find one.
Carole Osborne is the founder and CEO of Aidos, an AI safeguarding platform that makes pupils in school photographs permanently unidentifiable before they are shared online.