As generative artificial intelligence (AI) continues to make inroads in several industries, the World Health Organization (WHO) has raised alarm over the risk posed by the technology to healthcare.

Per a report, the WHO’s main concern lies with large multi-modal models (LMMs), citing their novelty and the absence of long-term data from real-world use. LMMs are generative AI models that can accept input from several data sources and generate outputs such as text, videos, or images.

WHO’s director of digital health, Alain Labrique, noted that the capabilities of LMMs lend themselves to several use cases in healthcare and medical research. Labrique identified five key areas for incorporating generative AI in healthcare, including diagnoses, drug synthesis, and simple clerical tasks.

Other use cases for the technology include patient-guided applications and medical education to train health workers. The WHO submits that the broad functionality of LMMs stems from their mimicry of human behavior and their “interactive problem-solving” abilities.

Despite the multiple use cases, the WHO issued a grim warning that LMMs may produce inaccurate outputs due to defects in their training data. It also warns of the risk of automation bias stemming from blind reliance on algorithms without seeking a second opinion.

“As LMMs gain broader use in health care and medicine, errors, misuse and ultimately harm to individuals are inevitable,” the WHO warned.

To mitigate the associated risks, the global health agency has rolled out a slew of recommendations for policymakers and healthcare providers. The WHO argues that a proactive approach to regulation, building on existing regulatory templates, offers the best chance to rein in the attendant risks.

Top of the list for the WHO is guaranteeing patients’ privacy and giving users the option to opt out of AI-backed healthcare services. The WHO is also particular about the security standards employed by LMMs, urging service providers to take steps to prevent breaches by bad actors.

Scientists should be roped in

A vital area of the WHO’s recommendations is the inclusion of scientists and medical personnel in the development of LMMs. The WHO goes a step further, calling for patients to be involved in their development to ensure that AI “contributes to the well-being of humanity.”

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” WHO Chief Scientist Jeremy Farrar stated.

AI has been making significant incursions into medicine in recent months, underscored by the reliance on emerging technologies for cancer detection, research, and evidence-based medicine.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain
