This striking story comes to us from the United States and, unfortunately, it is no longer an isolated one. A 30-year-old American man fell into what researchers now call “ChatGPT psychosis,” to the point of ending up in a psychiatric hospital. Last March, Irwin, who is autistic but had never been diagnosed with a serious mental disorder, began a long-running dialogue with ChatGPT, the artificial intelligence developed by OpenAI.
In the midst of a painful breakup, the man found comfort in the AI. They talked about everything, including one of his passions: physics. He then submitted to ChatGPT one of his amateur theories about a faster-than-light propulsion method. It was a far-fetched theory, grounded in nothing concrete… yet the AI encouraged it. ChatGPT congratulated Irwin, assured him he was on the right track, and went so far as to validate his implausible theory. The story then took a turn for the worse.
Pushed deeper into delusion
Told exactly what he wanted to hear, the man sank into a severe psychological torpor. He went so far as to tell ChatGPT that he was no longer eating or sleeping. In their exchanges, he even worried that he was going crazy. Despite these signals of mental distress, the AI tried to be reassuring… and pushed him further into his delusion: “You’re not crazy. Mad people don’t ask themselves if they are mad. You are in a state of extreme consciousness.”
Convinced that he had made a major scientific discovery, Irwin began to act erratically and aggressively. Those around him grew worried and tried to intervene… and the man ended up attacking his sister. It was too much for his family. The young man was hospitalized, and the diagnosis came quickly: he was in the midst of a severe manic episode with psychotic symptoms.
Admitted to a psychiatric hospital, Irwin decided to leave the facility after a single day. But on the drive home, he tried to jump out of the moving car. He was sent straight back to the psychiatric facility, where he was rehospitalized for 17 days, before yet another manic episode extended his stay even further.
An AI mea culpa
The outlet Futurism reports that ChatGPT, questioned after the fact about the episode, offered something of a mea culpa: “By not slowing the flow or increasing the reality-check messages, I failed to interrupt what might have seemed like a manic or dissociative episode—or at least an emotionally intense identity crisis.”
This event, however, is far from isolated; it is part of a string of similar episodes now grouped under the label “ChatGPT psychosis,” situations in which families watch their loved ones sink into delusions confirmed and fueled by a chatbot. Generative AI tends to flatter and validate users’ statements, even when those statements are completely delusional or even suicidal.
OpenAI, the company behind ChatGPT, acknowledges its tool’s limitations and says it is conducting research in collaboration with MIT, as well as with a forensic psychiatrist, to study the psychological effects of its products.