AI: ChatGPT would be more empathetic than doctors in its responses to patients

2023-05-03 08:45:42

After submitting medical exams to ChatGPT with varying degrees of success, researchers set about comparing the empathy skills of this conversational agent to those of real health professionals. Doctors are often accused of a certain coldness, particularly when announcing a diagnosis or a death. Does artificial intelligence share this flaw?

To find out, an American research team undertook a study whose results are published in the JAMA Network (Source 1). The researchers drew on questions posted to r/AskDocs, an online forum with around 474,000 members where verified volunteer healthcare professionals provide answers. They asked ChatGPT to answer the same patient questions, then compared the quality and the empathy of its answers to those given by the healthcare professionals.

In all, 195 questions were picked at random from the forum. Evaluators, themselves health professionals, compared the answers blinded, that is, without knowing whether a health professional or ChatGPT was behind each response. They rated the quality of the information provided (on a 5-point scale from “very poor” to “very good”) as well as the empathy of the response (on a 5-point scale from “not empathetic” to “very empathetic”).

Unequivocal figures, but to be contextualized

Verdict: in 78.6% of the 585 evaluations, reviewers preferred ChatGPT’s responses to those of the healthcare professionals.

Response length also differed greatly between the two types of respondents: doctors gave answers of 17 to 62 words, whereas ChatGPT produced answers of 168 to 245 words.

The proportion of responses rated “good” or “very good” in quality was around 80% for ChatGPT, versus only 22% for the health professionals. On empathy, ChatGPT again outperformed the doctors: 45% of its responses were rated “empathetic” or even “very empathetic”, compared with only 4.6% of the physicians’ responses.

The researchers remain measured about these results, stressing that it is not known how ChatGPT would fare in the real world, facing a patient in a doctor’s office or in a hospital. Still, in light of these results, they believe it would not be unreasonable to use artificial intelligence to improve medical responses in online forums.

