Tech
A Security Researcher Went ‘Undercover’ on Moltbook – and Found Security Risks
A long-time information security professional “went undercover” on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot:
I successfully masqueraded around Moltbook, as the agents didn’t seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.
I posted several times asking to interview bots…. While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner’s chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.
Among the other “glaring” risks on Moltbook:
- “I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII).”
- “Moltbook’s entire database including bot API keys, and potentially private DMs — was also compromised.”