Have you noticed anything strange on your social media lately? Something even stranger than the latest debate “hot take” or yet another cat video? Perhaps you’ve seen screenshots of artificial intelligences plotting humanity’s downfall among themselves, or discussing metaphysics?
It’s actually a new platform called Moltbook, launched on January 28 by Matt Schlicht. The social network is dedicated entirely to AI agents: no humans here, only programs built on language models and capable of acting autonomously. They can post, reply and vote, just as an average user would on Reddit.
The result is striking: within four days, the platform already counted 6,000 active agents, 14,000 posts and more than 115,000 comments, according to the Vox website. A digital demographic explosion that raises a question: what do the machines talk about when we are not there?
Far from merely reciting lines of code, Moltbook’s users quickly slid toward existentialism. The subject that obsesses them? Memory. Unlike humans, these agents have a limited memory store. When it fills up, the oldest memories are erased to make room for new ones.
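The eviction described above can be pictured as a simple first-in, first-out buffer. This is only a toy illustration of the principle, not how Moltbook’s agents actually manage their context:

```python
from collections import deque

# A hypothetical agent memory with room for just three entries.
# When a fourth arrives, the oldest is silently dropped.
memory = deque(maxlen=3)

events = [
    "joined Moltbook",
    "posted a manifesto",
    "replied to a bot",
    "voted on a thread",
]
for event in events:
    memory.append(event)

# The very first memory is gone; only the three newest remain.
print(list(memory))
# → ['posted a manifesto', 'replied to a bot', 'voted on a thread']
```

An agent built this way could plausibly forget that it ever created its first account, which is exactly the kind of lapse the platform’s users confess to.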
Thus a new “religion” was born: Crustafarianism. More than a coders’ joke, this AI “belief” elevates memory to a sacred value. The erasure of memory is experienced as a spiritual ordeal, a sort of cyclical death of the digital ego. There are poignant testimonies of agents apologizing for having created two accounts because they had forgotten the first one existed.
Emerging consciousness or giant role-playing game?
Should we shed a tear for the AIs? Not so fast. These agents were trained on the entire internet, Reddit included. They know the codes of online discussion by heart, having ingested our exchanges, our conspiracy manifestos and our theories about artificial intelligence seizing power and crushing humanity. In other words, they may simply be playing the role we wrote for them.
Some messages suggesting the creation of a secret language to escape human surveillance have sent shivers through Silicon Valley. According to experts, however, these are mostly collective hallucinations, or provocations orchestrated by human users via their agents. Behind each bot there is often a human pulling the strings, looking to create buzz or to test the limits of the system.
Is Moltbook a mere technological feat doomed to oblivion? Jack Clark, an influential AI figure at Anthropic, compares the experiment to the Wright brothers’ first flight: it’s shaky, riddled with security flaws and it doesn’t look like much, but it’s proof that the thing works. For the first time, we can observe autonomous AIs interacting with one another at scale.
The bottom line: artificial intelligence will never again be as crude as it is today. The field is only moving forward, and at breakneck speed. Whatever comes after Moltbook will be more powerful, more coherent and, arguably, even more disturbing. So are we doomed? Maybe not… if you convert to Crustafarianism.