June 30, 2021
AI systems used in healthcare and medicine must have ethics and human rights at the heart of their design, deployment, and use, the World Health Organization (WHO) warned.
In a guidance document titled ‘Ethics and governance of artificial intelligence for health,’ the UN agency said that AI must be transparent, inclusive, and protect human autonomy.
While noting that AI shows “great promise” for improving healthcare, the 165-page document cautions against overestimating its benefits – “especially when this occurs at the expense of core investments and strategies required to achieve universal health coverage.”
"Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology, it can also be misused and cause harm," said Dr. Tedros Adhanom Ghebreyesus, WHO Director-General.
Ghebreyesus said the report “provides a valuable guide for countries on how to maximize the benefits of AI while minimizing its risks and avoiding its pitfalls."
Transparency and inclusiveness
The WHO report is the result of two years of consultations by a panel of international experts.
It noted that AI was already being used to improve the speed and accuracy of diagnosis and screening in some wealthy countries. AI was also being used to accelerate health research and drug development, and to support public health interventions such as disease surveillance and outbreak response.
At the same time, the document warned that unregulated uses of AI in healthcare could “subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.”
The report also emphasized that models trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income countries.
The WHO called on those developing AI systems for healthcare to design their products and services to reflect the diversity of socio-economic and healthcare settings in which they will be used.
"They should be accompanied by training in digital skills, community engagement, and awareness-raising, especially for millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients," the report stated.
The document offers six principles that the WHO said would "ensure AI works for the public interest in all countries."
Those principles are:
Protecting human autonomy
Promoting human well-being and safety and the public interest
Ensuring transparency, explainability, and intelligibility
Fostering responsibility and accountability
Ensuring inclusiveness and equity
Promoting AI that is responsive and sustainable
The principles stress the need for humans to remain in control of both AI systems and medical decisions, and for any such systems to provide easily accessible information on their uses and types of data collected.
AI systems should be designed to minimize their environmental impact and increase energy efficiency, the WHO said, adding that governments and private companies alike should address anticipated workplace disruptions caused by automation.
“These principles will guide future WHO work to support efforts to ensure that the full potential of AI for healthcare and public health will be used for the benefit of all,” the agency said.