11-21-2024

As generative artificial intelligence (AI) continues to make inroads into several industries, the World Health Organization (WHO) has raised the alarm over the risks the technology poses to healthcare.

Per a report, the WHO’s main concern lies with large multi-modal models (LMMs), citing their novelty and the absence of long-term data on how they perform in real-world settings. LMMs are generative AI models that can accept input in several formats and generate outputs such as text, video, or images.

WHO’s director of digital health, Alain Labrique, noted that the capabilities of LMMs open up several use cases for healthcare and medical research. Labrique identified five key areas for incorporating generative AI in healthcare, including diagnosis, drug synthesis, and simple clerical tasks.

Other use cases for the technology include patient-guided applications and medical education to train health workers. The WHO attributes the broad functionality of LMMs to their mimicry of human behavior and their “interactive problem-solving” abilities.

Despite the multiple use cases, the WHO issued a grim warning that LMMs may produce inaccurate outputs due to defects in their training data. The agency also warns of the risk of automation bias, in which users rely blindly on algorithmic outputs without seeking a second opinion.

“As LMMs gain broader use in health care and medicine, errors, misuse and ultimately harm to individuals are inevitable,” the WHO warned.

To mitigate the associated risks, the global health agency has rolled out a slew of recommendations for policymakers and healthcare providers. The WHO argues that a proactive approach to regulation, building on existing regulatory templates, offers the best chance of reining in the attendant risks.

Top of the list for the WHO is guaranteeing patients’ privacy and giving users the option to opt out of AI-backed healthcare services. The WHO is also particular about the security standards surrounding LMMs, urging service providers to take steps to prevent security breaches by bad actors.

Scientists should be roped in

A vital area of the WHO’s recommendations is the inclusion of scientists and medical personnel in the development of LMMs. The WHO takes it up a notch by angling for patients to be involved in their development to ensure that AI “contributes to the well-being of humanity.”

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” WHO Chief Scientist Jeremy Farrar stated.

AI has been making significant incursions into medicine in recent months, underscored by its growing use in cancer detection, research, and evidence-based medicine.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain
