
WHO warns of AI risks in healthcare, calls for tighter regulatory framework

As generative artificial intelligence (AI) continues to make inroads in several industries, the World Health Organization (WHO) has raised the alarm over the risks the technology poses to healthcare.

Per a report, the WHO’s main concern lies with large multi-modal models (LMMs), citing their novelty and the absence of long-term data on their performance in real-world scenarios. LMMs are generative AI models that can accept data inputs from several sources and generate outputs such as text, video, or images.

WHO’s director of digital health, Alain Labrique, said that the functionalities of LMMs offer several use cases for healthcare and medical research. Labrique identified five key areas in which generative AI could be incorporated into healthcare, including diagnosis, drug synthesis, and simple clerical tasks.

Other use cases for the technology include patient-guided applications and medical education to train health workers. The WHO attributes the broad functionality of LMMs to their mimicry of human behavior and their “interactive problem-solving” abilities.

Despite the multiple use cases, the WHO issued a grim warning that LMMs may produce inaccurate outputs due to defects in their training data. The WHO also warns of the risk of automation bias, in which users rely blindly on algorithmic outputs without seeking a second opinion.

“As LMMs gain broader use in health care and medicine, errors, misuse and ultimately harm to individuals are inevitable,” the WHO warned.

To mitigate the associated risks, the global health agency has rolled out a slew of recommendations to policymakers and healthcare providers. The WHO argues that a proactive approach toward regulation offers the best chance to rein in attendant risks, building on existing regulatory templates.

Top of the list for the WHO is the guarantee of patients’ privacy, including the ability for users to opt out of AI-backed healthcare services. The WHO is also particular about the security standards LMMs employ, urging service providers to take steps to prevent breaches by bad actors.

Scientists should be roped in

A vital area of the WHO’s recommendations is the inclusion of scientists and medical personnel in the development of LMMs. The WHO goes a step further, pushing for patients to be involved in their development to ensure that AI “contributes to the well-being of humanity.”

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” WHO Chief Scientist Jeremy Farrar stated.

AI has made significant inroads into medicine in recent months, underscored by the growing reliance on emerging technologies for cancer detection, research, and evidence-based medicine.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain

