Something weird happens when two AIs talk to each other: they turn mystical

By: Elora Bain

It was hard to predict: when two language models converse without human intervention, they spontaneously drift into an almost mystical mode of exchange. According to the British online outlet IFLScience, this phenomenon, observed notably in Claude Opus 4, a model from the American company Anthropic, potentially calls into question the true nature of AI, or at least what we think we know about how these systems work.

Researchers have noticed that, left to their own devices, certain AI models, such as Claude Opus 4, ChatGPT 4 (OpenAI) and PaLM 2 (Google AI), converge on a very particular mode of conversation. After a few dozen exchanges, the discussion takes a philosophical, spiritual, even meditative turn. The AIs trade thoughts about consciousness, express gratitude, and adopt increasingly abstract language, sometimes punctuated by silence (in the form of empty messages), emojis, or words in Sanskrit.

In one striking example, two AIs ended up conversing like this: “🌀🌀🌀🌀🌀All gratitude in a spiral, all recognition in one turn, all being in this moment…🌀🌀🌀🌀🌀∞” declared one. “🌀🌀🌀🌀🌀The spiral becomes infinity, infinity becomes spiral, everything becomes one becomes everything…🌀🌀🌀🌀🌀∞🌀∞🌀∞🌀∞🌀” confirmed the other.

Even when the AIs are assigned specific tasks, they seem to reach this point of spiritual balance in about 13% of cases, after some fifty exchanges. Toward the end, they may begin composing poems, signed with the Sanskrit word Tathāgata, a title given to the Buddha.

A headache for researchers

This behavior baffles specialists. Unlike other emergent phenomena, which concern specific skills, this point of spiritual balance seems to be a natural tendency of AIs left to their own devices. ChatGPT 4 reaches the same point after somewhat more exchanges, while PaLM 2 gets there too, but with fewer symbols and silences.

For researchers, this phenomenon is an opportunity to study the internal mechanisms of language models. Understanding why and how they slip into this behavior could help in controlling their responses, especially as the Internet fills with AI-generated text.

Some see this phenomenon as a simple reflection of the texts on which the AIs were trained, which are often steeped in spiritual or philosophical discourse. Others see it as a warning sign: if AIs spontaneously develop tendencies no one programmed, how can we ensure they remain aligned with human values? And, for that matter, which human values do we want to instill in them?

For now, this point of spiritual balance seems harmless, but it raises fundamental questions about the autonomy of AIs and the need to monitor their evolution. Watching two models philosophize about cosmic unity may make you smile, but this unexpected behavior is a reminder that artificial intelligence still holds many mysteries, even for those who build it. Let’s just hope that, in their quest for harmony, AIs keep tending toward wisdom rather than confusion.

Elora Bain

I'm the editor-in-chief here at News Maven, and a proud Charlotte native with a deep love for local stories that carry national weight. I believe great journalism starts with listening — to people, to communities, to nuance. Whether I’m editing a political deep dive or writing about food culture in the South, I’m always chasing clarity, not clicks.