For most of us, generative AI (GenAI) has moved from novelty to everyday infrastructure astonishingly fast. Many adults now use chatbots and similar tools at work or casually, and many children are already encountering them through homework “help”, entertainment, or social sharing.
Unsupervised use of generative AI can expose children and young people to confidently presented misinformation, manipulative “keep chatting” dynamics, and inappropriate or emotionally risky content. The tone and conversational dynamics of many chatbots can encourage secrecy and over-reliance, or mimic authority without real understanding or duty of care. In school contexts, GenAI can quietly undermine learning, turning homework and writing into shortcuts rather than skill-building.
I’ve helped create new school resources on GenAI, including guidance for parents. But the most effective safety measures still depend on adults setting boundaries, modelling critical thinking, and staying close enough to a child’s digital life to notice what’s changing in it. What follows are some practical ways to talk about, assess, and limit younger people’s GenAI use.
1. Begin with curiosity – not crackdowns
If you start by telling a child that they shouldn’t use GenAI, you may simply prompt secrecy about their current and future use. A better opener is to ask them to show you the AI tools they’re already familiar with. Ask what they like about them, what they help with, and what they’d never use them for. The initial aim is to normalise talking about AI, not to normalise unrestricted use.
From here it’s easier to acknowledge that these are powerful and intriguing tools – but not a person, not an authority, and not without risks that need to be taken seriously.
2. Don’t treat stated age limits as optional
An awkward reality that many parents may have missed is that many popular AI services set 13 as a minimum age (with parental permission required under 18). OpenAI states that ChatGPT “is not meant for children under 13”, and requires parental consent for users aged 13 to 18. The AI chatbot ecosystem is inconsistent, however: Anthropic requires Claude users to be 18+, explicitly citing heightened risks for younger users, while Google allows supervised access to Gemini for under-13s via parent-enabled controls.
A practical rule is to treat age limits as a clear safety signal rather than a box-ticking exercise. If a service says “13+” or “18+”, that’s telling you something about risk, likely content exposure and the potential for harm from unsupervised use by young people.
3. Encourage fact-checking
Children (and indeed plenty of adults) can mistake confidence for correctness. When talking about GenAI with children, emphasise that AI chatbots can and regularly do “hallucinate”: they invent plausible-sounding details and mix fabrication with fact. It is essential that young users understand that those quick, fluent, confident responses can contain inaccuracies both large and small.
Encourage them to verify anything important – news, health claims, legal questions, school facts, or anything they might repeat as “true”.
4. Help them know when to stop
Large language models (LLMs) are designed to keep conversation flowing. They compliment, encourage, reassure and suggest what to do next. This may be helpful for brainstorming but it’s potentially dangerous for emotionally loaded topics where a young person is vulnerable, impressionable, or isolated.
Recent litigation around “companion” chatbots has alleged that vulnerable young users were pulled into harmful spirals, including self-harm risk and secrecy from parents. These are complex and unfolding cases, but they are serious enough to treat as a major warning sign about unsupervised, open-ended AI conversations for minors.
Parents and teachers should name a firm boundary: no chatbot is a counsellor, therapist, or trusted confidant. If a conversation becomes sexual, self-harm-related, frightening, or intensely personal, the rule should be to stop and speak to a trusted adult.
5. Don’t feed the machine personal data
Young people often understand privacy better when it’s framed as something tangible. Some rules: don’t share a full name, address, school, phone number, or identifiable photos. Don’t upload private documents or screenshots. Don’t paste in other people’s personal information. If you wouldn’t post it on a public noticeboard, don’t paste it into a chatbot.
6. AI should support the work, not do the work
GenAI poses an educational risk that deserves far more attention: cognitive off-loading. This happens when the tool performs the thinking step – the learner may finish faster, but will learn less. Research is increasingly linking heavier AI reliance with reduced critical thinking and lower cognitive effort, with off-loading and automation bias proposed as mechanisms. A practical way to explain this to young people is that “AI can help you learn, but it can also help you avoid learning”.
If you’re helping with homework, allow GenAI for explaining a concept in simpler terms or giving feedback on a draft. Don’t allow it to write the essay, answer the homework questions directly, or produce a solution that the student can’t explain.
7. Make AI use visible and social
Where AI use is permitted, aim to reduce secrecy. Use AI in shared spaces at home. Set agreed times, not late-night private use. Coordinate with other adults: parents should share their concerns and approaches with other parents and with school staff.
We should treat generative AI as we wish we’d treated social media much earlier – not as just another app, but as a behavioural technology that shapes attention, learning, confidence and relationships. Being AI-aware is not about panic; it’s about adults building enough knowledge and confidence to guide children toward safe, age-appropriate, genuinely educational use while regulation and curriculum development catch up.
