Medical AI is more accurate, but patients still trust human advice more
Study findings can be applied to legal, insurance, education and travel
September 8, 2022
Scientists have discovered that people trust preventative care advice from humans more than AI-powered notifications.
A recent study from researchers at Nanyang Technological University, Singapore (NTU Singapore) looked at how seriously people take advice from AI-prompted interventions.
The health care industry is increasingly adopting AI to diagnose, screen and treat patients. AI has the potential to “revolutionize health care” and can improve both productivity and the efficiency of care delivery, according to McKinsey. The global AI health care market was worth $6.6 billion in 2021 but is expected to reach $95 billion by 2028, according to Vantage Market Research.
In a bid to understand patient perceptions of AI, the researchers found that human touch is still imperative for effective preventative healthcare programs. Preventative care is aimed at decreasing health risks by encouraging people to get vaccinations, sign up for health screenings, or perform physical exercises.
The scientists, who published their results in the journal Production and Operations Management, believe their findings could also apply to the legal, insurance, education, and travel industries.
"Despite the potential of artificial intelligence to provide higher quality interventions, we found that people have lower trust in health interventions suggested by or derived from AI alone, as compared to those they perceive to be based on human expert opinion," said Hyeokkoo Eric Kwon, lead author and assistant professor at NTU Nanyang Business School (NBS).
The study involved 9,000 users of a mobile health app in South Korea. The app displayed personalized pop-up notifications generated by an AI algorithm encouraging users to walk, and then counted the number of steps they took.
The participants were divided evenly into three groups. The AI-prompt intervention group received instructions that began with “AI recommends that you walk (personalized step goal) number of steps in the next seven days. Would you like to participate?” The next group was given directions that replaced “AI” with “health expert.” The control group received a neutral message without mentioning a human or AI.
In the human-suggested intervention group, 22% of participants accepted the invitation and 13% reached their step goal, both higher than in the AI-prompted intervention group.
The researchers then added two more groups of users: one was told how the AI generated the intervention, and the other was told that the AI worked in collaboration with human health experts. The message emphasizing AI’s collaboration with humans produced a 27% acceptance rate, while transparency about AI’s role alone produced a 21% acceptance rate.
“Our study shows that the affective human element, which is linked to emotions and attitudes, remains important even as health interventions are increasingly guided by AI, and that such technology works best when complementing humans, rather than replacing them,” said Kwon.