No more pediatricians or general practitioners: when it comes to their children's health, parents would trust the artificial intelligence tool ChatGPT, Vice reports. That is the finding of a study recently published in the Journal of Pediatric Psychology by researchers from the University of Kansas, in the United States.
The study aimed to determine whether, in parents' eyes, a text generated by ChatGPT was as trustworthy as one written by a medical expert. The behavior of 116 parents aged 18 to 65 was studied. Participants first completed a baseline assessment of their behavioral intentions regarding pediatric health care, then rated texts generated either by an expert or by ChatGPT.
ChatGPT more reliable than a medical expert, according to some parents
“We started this research just after the launch of ChatGPT because we were concerned that parents would use this new tool to gather information about their children's health,” explains Calissa Leslie-Miller, lead author of the study. She continues: “Parents often turn to the Internet for advice, so we wanted to understand what their use of ChatGPT would look like and whether we should worry about it.”
The study revealed that ChatGPT is capable of influencing parents' behavior toward their children regarding medication, sleep and diet. Participants saw “little difference” between statements from ChatGPT and those from a doctor in terms of morality, reliability, expertise, accuracy and trustworthiness. More alarming, parents who did perceive a difference leaned toward ChatGPT on reliability and accuracy. Participants also indicated that they would be more inclined to trust information from ChatGPT than from an expert.
“People find it difficult to distinguish a text generated by AI from content written by an expert”
“This result surprised us, especially since the study took place at the very beginning of ChatGPT,” says a surprised Calissa Leslie-Miller. “AI is integrated into digital content in ways that are sometimes implicit, and people sometimes find it difficult to distinguish a text generated by AI from content written by an expert,” she adds.
The main problem, she says, is that when ChatGPT lacks sufficient context to answer a question, the system produces a “hallucination,” embroidering an answer with sometimes invented facts. It can also relay erroneous information if its training data has not been updated with the latest studies and scientific articles published on the subject.
Calissa Leslie-Miller warns: “In the field of children's health, the consequences can be considerable. We fear that people will rely more and more on AI for health advice, without the oversight of an expert. We absolutely must tackle this problem.” While AI has great potential, it is not an expert, and much of the information it provides does not come from expert sources either.