South Korean serial killer allegedly used ChatGPT to plan her murders

By: Elora Bain

“What happens if you take sleeping pills with alcohol?” “What dose would be dangerous?” “Could it be fatal?” These are among the questions a 21-year-old woman posed to ChatGPT. She is accused of killing two men by serving them drinks laced with benzodiazepines, a class of drugs commonly prescribed as anxiolytics.

The drugs had been prescribed to the young South Korean woman for mental health problems, the BBC and the Korea Herald report. Arrested on February 11 on a charge of battery resulting in death, she is now charged with two counts of murder after investigators determined from her online activity that she intended to kill.

Modus operandi

According to police, the first death occurred on the evening of January 28. The accused allegedly entered a motel in the Suyu-dong neighborhood of the Gangbuk-gu district in the company of a man in his twenties, before leaving the establishment alone two hours later. The next day, the man was found dead in his bed.

The second death, on February 9, apparently followed an almost identical pattern: the suspect checked into another motel with another man in his twenties, then left alone. The victim was later discovered lifeless.

Authorities say that in December she also tried to kill a man with whom she had been in a relationship: in the parking lot of a cafe, she offered him a drink containing sedatives, but he only lost consciousness and survived.

The case is likely to reignite debates around the reliability and safeguards of generative artificial intelligence tools such as ChatGPT. Such tools are also suspected of fueling spirals of delusional thinking, which some experts refer to as AI-related psychosis. A chatbot’s near-human persona, coupled with its sycophantic style of interaction, can reinforce a user’s delusions and fragile mental state.

Some cases have even ended in suicide or murder: Futurism cites that of a 16-year-old who took his own life after months of conversations with ChatGPT, and that of a man accused of murdering his mother after exchanges with a generative AI convinced him she was part of a plot against him. Stories of this kind abound, and companies like OpenAI appear to be struggling to devise effective safeguards to prevent them from recurring.

Elora Bain

I'm the editor-in-chief here at News Maven, and a proud Charlotte native with a deep love for local stories that carry national weight. I believe great journalism starts with listening — to people, to communities, to nuance. Whether I’m editing a political deep dive or writing about food culture in the South, I’m always chasing clarity, not clicks.