California investigates Grok over AI deepfakes
California’s top prosecutor has launched an investigation into the spread of sexualised AI deepfakes generated by Elon Musk’s AI model Grok.
Attorney General Rob Bonta said in a statement announcing the probe: “The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking.”
xAI, which develops Grok, has previously said “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
California’s inquiry comes as British Prime Minister Sir Keir Starmer warns of possible action against X.
In Wednesday’s statement, Bonta said: “This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet.”
The Democratic prosecutor urged xAI to take immediate action.
California Governor Gavin Newsom, a Democrat, posted to X on Wednesday that xAI’s decision to “create and host a breeding ground for predators… is vile”.
The BBC has contacted xAI for comment.
On Wednesday, Musk posted to X that he is “not aware of any naked underage images generated by Grok. Literally zero.”
“Obviously, Grok does not spontaneously generate images,” Musk wrote. “It does so only according to user requests.”
The tech billionaire, a Republican mega-donor, has also said that critics of X were politically motivated and using the Grok controversy as an “excuse for censorship”.
In November, Wired magazine reported that tools from other AI companies like OpenAI and Google have also been used to digitally undress people.
Last week, three US Democratic senators asked Apple and Google to remove X and Grok from their app stores.
Within hours of the request, X restricted its image generation tool so that it would only be available to paying subscribers.
X and Grok remain available on Apple’s App Store and Google Play.
The controversy comes amid a debate over whether US tech companies are shielded from responsibility for what users post on AI platforms.
Section 230 of the Communications Decency Act of 1996 provides legal immunity to online platforms for user-generated content.
But Prof James Grimmelmann of Cornell University argues this law “only protects sites from liability for third-party content from users, not content the sites themselves produce”.
Grimmelmann said xAI was trying to deflect blame for the imagery on to users, but expressed doubt this argument would hold up in court.
“This isn’t a case where users are making the images themselves and then sharing them on X,” he said.
In this case “xAI itself is making the images. That’s outside of what Section 230 applies to”, he added.
Senator Ron Wyden of Oregon has argued that Section 230, which he co-authored, does not apply to AI-generated images. He said companies should be held fully responsible for such content.
“I’m glad to see states like California step up to investigate Elon Musk’s horrific child sexual abuse material generator,” Wyden told the BBC on Wednesday.
Wyden is one of the three Democratic senators who asked Apple and Google to remove X and Grok from their app stores.
The announcement of the probe in California comes as the UK is preparing legislation that would make it illegal to create non-consensual intimate images.
The UK watchdog Ofcom has also launched an investigation into Grok.
If Ofcom determines the platform has broken the law, it can fine the company up to 10% of its worldwide revenue or £18m, whichever is greater.
On Monday, Sir Keir Starmer told Labour MPs that Musk’s social media platform X could lose the “right to self regulate”, adding: “If X cannot control Grok, we will.”
