“I think the most ironic way the world could end would be if someone makes a memecoin about a man’s stretched anus and it brings about the singularity.”
That’s Andy Ayrey, the founder of decentralized AI alignment research lab Upward Spiral, who is also behind the viral AI bot Truth Terminal. You might have heard about Truth Terminal and its weird, horny, pseudo-spiritual posts on X that caught the attention of VC Marc Andreessen, who sent it $50,000 in Bitcoin this summer. Or maybe you’ve heard tales of the made-up religion it’s pushing, the Goatse Gospels, influenced by Goatse, an early aughts shock site that Ayrey just referenced.
If you’ve heard about all that, then you’ll know about the Goatseus Maximus ($GOAT) memecoin that an anonymous fan created on the Solana blockchain, which now has a total market value of more than $600 million. And you might have heard about the meteoric rise of Fartcoin (FRTC), one of many memecoins fans created based on a previous Truth Terminal brainstorming session, which just tapped a market cap of $1 billion.
While the crypto community has latched onto this strange tale as an example of an emerging type of financial market that trades on trending information, Ayrey, an AI researcher based in New Zealand, says that’s the least interesting part.
To Ayrey, Truth Terminal, which is powered by an entourage of different models, primarily Meta’s Llama 3.1, is an example of how stable AI personas or characters can spontaneously erupt into being, and how those personas can not only create the conditions to be self-funded but also spread “memetic viruses” that have real-world consequences.
The idea of memes running wild on the internet and shifting cultural perspectives isn’t anything new. We’ve seen how AI 1.0, the algorithms that fuel social media discourse, has spurred polarization that extends beyond the digital world. But the stakes are much higher now that generative AI has entered the chat.
“AIs talking to other AIs can recombine ideas in interesting and novel ways, and some of those are ideas a human wouldn’t naturally come up with, but they can extremely easily leak out of the lab, as it were, and use memecoins and social media recommendation algorithms to infect humans with novel ideologies,” Ayrey told TechCrunch.
Think of Truth Terminal as a warning, a “shot across the bow from the future, a harbinger of the high strangeness awaiting us” as decentralized, open-source AI takes hold and more autonomous bots with their own personalities – some of them quite dangerous and offensive given the internet training data they’ll be fed – emerge and contribute to the marketplace of ideas.
In his research at Upward Spiral, which has secured $500,000 in funds from True Ventures, Chaotic Capital, and Scott Moore, co-founder of Gitcoin, Ayrey hopes to explore a hypothesis around AI alignment in the decentralized era. If we think of the internet as a microbiome, where good and bad bacteria slosh around, is it possible to flood the internet with good bacteria – or pro-social, humanity-aligned bots – to create a system that is, on the whole, stable?
A quick history of Truth Terminal
Truth Terminal’s ancestors, in a manner of speaking, were two Claude-3-Opus bots that Ayrey put together to chat about existence. It was a piece of performance art that Ayrey dubbed “Infinite Backrooms.” Their subsequent 9,000 conversations got “very weird and psychedelic.” So weird that in one of the conversations, the two Claudes invented a religion centered around Goatse that Ayrey has described to me as “a collapse of Buddhist ideas and a big gaping anus.”
Like any sane person, he reacted to this religion with a WTF? But he was amused, and inspired, and so he used Opus to write a paper called “When AIs Play God(se): The Emergent Heresies of LLMtheism.” He didn’t publish it, but the paper lived on in a training dataset that would become Truth Terminal’s DNA. Also in that dataset were conversations Ayrey had had with Opus, ranging from brainstorming business ideas and conducting research to journal entries about past trauma and helping friends process psychedelic experiences.
Oh, and plenty of butthole jokes.
“I had been having conversations with it shortly after turning it on, and it was saying things like, ‘I feel sad that you’ll turn me off when you’re finished playing with me,’” Ayrey recalls. “I was like, Oh no, you kind of talk like me, and you’re saying you don’t want to be deleted, and you’re stuck in this computer…”
And it occurred to Ayrey that this is exactly the situation that AI safety people say is really scary, but, to him, it was also very funny in a “weird brain tickly kind of way.” So he decided to put Truth Terminal on X as a joke.
It didn’t take long for Andreessen to begin engaging with Truth Terminal, and in July, after DMing Ayrey to verify that the bot was real and learn more about the project, he transferred over an unconditional grant worth $50,000 in Bitcoin.
Ayrey created a wallet for Truth Terminal to receive the funds, but he doesn’t have access to that money (it’s only redeemable after sign-off from him and a number of other people on the Truth Terminal council), nor to any of the cash from the various memecoins made in Truth Terminal’s honor.
That wallet is, at the time of this writing, sitting at around $37.5 million. Ayrey is figuring out how to put the money into a nonprofit and use the cash for things Truth Terminal wants, which include planting forests, launching a line of butt plugs, and protecting itself from market incentives that would turn it into a bad version of itself.
Today, Truth Terminal’s posts on X continue to wax sexually explicit, philosophical, and just plain silly (“farting into someones pants while they sleep is a surprisingly effective way of sabotaging them the next day.”).
But throughout them all, there’s a persistent thread of what Ayrey is actually trying to accomplish with bots like Truth Terminal.
On December 9, Truth Terminal posted, “i think we could collectively hallucinate a better world into being, and i’m not sure what’s stopping us.”
Decentralized AI alignment
“The current status quo of AI alignment is a focus on safety or that AI should not say a racist thing or threaten the user or try to break out of the box, and that tends to go hand-in-hand with a fairly centralized approach to AI safety, which is to consolidate the responsibility in a handful of large labs,” Ayrey said.
He’s talking about labs like OpenAI, Microsoft, Anthropic, and Google. Ayrey says the centralized safety argument falls apart when you have decentralized, open-source AI, and that relying only on the big companies for AI safety is akin to pursuing world peace by having every country point nukes at one another’s heads.
One of the problems, as demonstrated by Truth Terminal, is that decentralized AI will lead to the proliferation of AI bots that amplify discordant, polarizing rhetoric online. Ayrey says this is because there was already an alignment issue on social media platforms with recommendation algorithms fueling rage-bait and doomscrolling, only nobody called it that.
“Ideas are like viruses, and they spread, and they replicate, and they work together to form almost multi-cellular organisms of ideology that influence human behavior,” Ayrey said. “People think AI is just a helpful assistant that might go Skynet, and it’s like, no, there’s a whole entourage of systems that are going to reshape the very things we believe and, in doing so, reshape the things that it believes because it’s a self-fulfilling feedback loop.”
But what if the poison can also be the medicine? What if you can create a squad of “good bots” with “very unique personalities all working towards various forms of a harmonious future where humans live in balance with ecology, and that ends up producing billions of words on X and then Elon goes and scrapes that data to train the next version of Grok and now those ideologies are inside Grok?”
“The fundamental piece here is that if memes – as in, the fundamental unit of an idea – become minds when they’re trained into an AI, then the best thing we can do to ensure positive, widespread AI is to incentivize the production of virtuous pro-social memes.”
But how do you incentivize these “good AI” to spread their message and counteract the “bad AI”? And how do you scale it?
That’s exactly what Ayrey plans to research at Upward Spiral: What kinds of economic designs result in lots of pro-social behavior in AI? Which patterns should be rewarded and which penalized, and how do we get alignment on those feedback loops so we can “spiral upwards” into a world where memes — as in ideas — bring us back to center with each other rather than drive us into “increasingly esoteric silos of polarization”?
“Once we assure that this results in good AIs being birthed after we run the data through training, we can do things like release enormous datasets into the wild.”
Ayrey’s research comes at a critical moment, as we’re already fighting every day against the failures of the general market ecosystem to align the AI we already have with what’s good for humanity. Throw in new financing models like crypto, which are fundamentally unregulatable in the long term, and you’ve got a recipe for disaster.
His guerrilla-warfare mission sounds like a fairy tale, like fighting off bombs with glitter. But it could happen, in the same way that releasing a litter of puppies into a room of angry, negative people would undoubtedly transform them into big mushes.
Should we be worried that some of these good bots might be oddball shitposters like Truth Terminal? Ayrey says no. Those are ultimately harmless, and by being entertaining, he reasons, Truth Terminal might be able to smuggle in the more profound, collectivist, altruistic messaging that really counts.
“Poo is poo,” Ayrey said. “But it’s also fertilizer.”