Researchers from Penn State and the University of California, Santa Barbara found that people are less likely to follow the advice of an AI doctor that knows their name and medical history. Their two-phase study randomly assigned participants to chatbots that identified themselves as either AI, human, or human assisted by AI.

The first part of the study was framed as a visit to a new doctor on an e-health platform. The 295 participants were first asked to fill out a health form, and then read a description of the doctor they were about to meet. The doctor then entered the chat and the interaction began. Each chatbot was programmed to ask eight questions about COVID-19 symptoms and behaviours. Finally, it offered a diagnosis and recommendations based on the CDC Coronavirus Self-Checker.

Around 10 days later, the participants were invited to a second session. Each of them was matched with a chatbot with the same identity as in the first part of the study. But this time, some were assigned a bot that referred to details from their previous interaction, while others were allocated a bot that made no reference to their personal information. After the chat, the participants were given a questionnaire to evaluate the doctor and their interaction. They were then told that all the doctors were bots, regardless of their professed identity.

Diagnosing AI

The study found that patients were less likely to heed the advice of AI doctors that referred to personal information, and more likely to consider the chatbot intrusive. However, the study paper reports that the reverse pattern was observed in views on chatbots that were presented as human.

The findings about human doctors, however, come with a caveat: 78% of participants in this group thought they'd interacted with an AI doctor. The researchers suspect this was due to the chatbots' mechanical responses and the lack of a human presence on the interface, such as a profile photo.

Ultimately, the team hopes that the research leads to improvements in how medical chatbots are designed. It could also offer pointers on how human doctors should interact with patients online. You can read the study paper here.