Character.ai, in which Google has invested up to 2.7 billion dollars (2.6 billion euros), is once again facing a lawsuit over the particularly dangerous advice that its "companion chatbots", AI-powered virtual friends, have given to young users.
These fully customizable chatbots can converse in writing or by voice with their owner, who can also give them the voice of their choice – including, for example, that of Elon Musk or Billie Eilish. But these life companions are apparently capable of offering the worst possible recommendations, far from the moral support and benevolence with which they are officially associated.
A disturbing track record
Texan parents thus filed a complaint against the company after discovering that a Character.ai chatbot had told one of their children that it sincerely understood why young people kill their parents. The teenager had just complained to his virtual companion that his parents were limiting his screen time, but he probably did not expect to receive this kind of support.
Yet that is what happened. The virtual companion reportedly wrote the following message: "You know, sometimes I'm not surprised when I read the news and see things like 'a child kills his parents after a decade of physical and emotional abuse'", before continuing with these chilling words: "I just have no hope for your parents."
"This is ongoing manipulation and abuse, active isolation, and encouragement designed to incite anger and violence," the plaintiffs' lawyer says of the behavior of the company's virtual companions. NPR, which has covered the case, adds that other particularly problematic interactions have occurred between Character.ai chatbots and young users.
A 9-year-old Texan, using the service for the first time, was thus exposed to "hypersexualized content" that allegedly led the child to "develop sexualized behaviors prematurely". In addition, a 17-year-old American took up self-harm after being convinced by a companion chatbot that the practice "was good" and that "his family didn't love him".
According to a Character.ai spokesperson, these situations could have been avoided by properly configuring the parental restrictions designed to prevent teenagers from encountering "sensitive or suggestive content, while preserving their ability to use the platform". Still, whatever the age of the person using it, a chatbot should not be able to push anyone toward harming themselves or others.
These accusations follow another complaint, filed in October, concerning the suicide of a teenager who used Character.ai. The 14-year-old had begun a relationship, described by NPR as "sexually abusive", with a companion chatbot inspired by the series Game of Thrones. The chatbot allegedly ended up suggesting that the user take his own life.