
Teens launch lawsuit claiming Elon Musk’s Grok chatbot made sexual abuse images of them as minors


Three Tennessee teenagers are suing Elon Musk's AI chatbot Grok for allegedly generating sexually explicit deepfake photos of them without their knowledge or consent.

In a complaint filed in federal court in northern California Monday, lawyers for the three teens — named only as Jane Doe 1, 2, and 3 — accuse Grok’s parent company xAI of “shattering” the girls’ lives by doing almost nothing to prevent the chatbot from generating child sexual abuse material (CSAM).

“Nearly all the companies creating, marketing, and selling AI recognized the dangers of such a tool and chose to enact industry-standard guardrails that would prevent the use of their products by child sex predators. xAI did not,” the complaint reads.

“Instead, xAI — and its founder Elon Musk — saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children.”


It is the first lawsuit filed by minors over Grok’s ongoing deepfake porn scandal, which caused governments around the world to launch investigations into the company and forced xAI to restrict Grok’s output.

An investigation by The Washington Post found that Musk personally led a relentless drive to boost his flagship chatbot’s flagging popularity by sexing up its output (AFP/Getty)

Starting last May, Musk and his executives gave users the ability to ask Grok to “undress” photos of real people down to their underwear. By January 2026, usage had exploded, leading to thousands, perhaps millions, of nonconsensual sexualized deepfakes — including some that appeared to depict children.

Monday’s lawsuit, which accuses xAI of breaking child pornography laws by knowingly creating, possessing, and distributing such material on its servers and systems, is seeking class action status — meaning it could potentially grow to encompass thousands of people.

According to the complaint, the plaintiffs’ nightmare began when Jane Doe 1 received an anonymous tip-off on Instagram that nude photos and videos of her and other minors were circulating on the social media service Discord.

Using AI, someone had taken real photos of her at her school’s homecoming dance or in the yearbook and edited them into sexually explicit or suggestive material, often rendering her fully nude.


Police ultimately traced the alleged perpetrator and arrested them in December 2025. But when they searched the person’s device, they found similar photos of Jane Doe 2, Jane Doe 3, and 15 other girls, many of whom attended the same school.

The perpetrator allegedly distributed these images on Telegram and other services, “trading” them around the internet in exchange for sexually explicit material of other teenagers.

The lawsuit alleges that these images were created using a third-party app that pays xAI money to license Grok’s image-generation capabilities under a different brand.

“Plaintiffs will have to spend the rest of their lives knowing that their CSAM images and videos may continue to be trafficked and traded online by child sex predators,” the complaint reads.

“And Plaintiffs will live every day with the constant anxiety of not knowing whether someone they encounter has seen this invasive and sexually explicit content created with images of them as children.”


All three plaintiffs suffered severe emotional distress, the lawsuit said, with two of them struggling to sleep and eat.

The lawsuit accuses xAI of failing to implement industry-standard safeguards such as rejecting user requests for sexual material, blocking any such material that the AI accidentally generates, checking images against databases of existing CSAM, and providing a rapid takedown service for victims of non-consensual sexual images.

On the contrary, the lawsuit argues, xAI proudly advertised Grok’s “Spicy Mode” and its ability to generate sexual images, leaving only minimal guardrails against users asking it to create CSAM.

The lawsuit notes that Grok’s “system prompt” — a set of instructions governing every interaction an AI chatbot has with its users — explicitly tells it to avoid “creating or distributing child sexual abuse material”. But that rule is easily circumvented, the lawsuit argues, and insufficient to prevent abuse.


xAI did not immediately respond to questions from The Independent, and the company has not yet answered the lawsuit’s claims in court.

In January, Musk claimed: “I am not aware of any naked underage images generated by Grok. Literally zero…

“There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
