As generative artificial intelligence (AI) continues to make inroads in several industries, the World Health Organization (WHO) has raised alarm over the risk posed by the technology to healthcare.

Per a report, the WHO’s main concern lies with large multi-modal models (LMMs), citing their novelty and the absence of long-term, real-world performance data. LMMs are generative AI models that can accept input from several data types and generate outputs such as text, video, or images.

WHO’s director of digital health, Alain Labrique, disclosed that the capabilities of LMMs offer several use cases for healthcare and medical research. Labrique identified five key areas for incorporating generative AI into healthcare, including diagnoses, drug synthesis, and simple clerical tasks.

Other use cases for the technology include patient-guided applications and medical education to train health workers. The WHO submits that the broad functionality of LMMs stems from their mimicry of human behavior and their “interactive problem-solving” abilities.

Despite the multiple use cases, the WHO issued a grim warning that LMMs may produce inaccurate outputs due to defects in their training data. The agency also warns of the risk of automation bias stemming from blind reliance on algorithms without seeking a second opinion.

“As LMMs gain broader use in health care and medicine, errors, misuse and ultimately harm to individuals are inevitable,” the WHO warned.

To mitigate the associated risks, the global health agency has rolled out a slew of recommendations to policymakers and healthcare providers. The WHO argues that a proactive approach toward regulation offers the best chance to rein in attendant risks, building on existing regulatory templates.

Top of the list for the WHO is the guarantee of patients’ privacy and the ability for users to opt out of AI-backed healthcare services. The WHO is also particular about the security standards employed by LMMs, urging service providers to take steps to prevent breaches by bad actors.

Scientists should be roped in

A vital area of the WHO’s recommendations is the inclusion of scientists and medical personnel in the development of LMMs. The WHO goes a step further, calling for patients to be involved in that development to ensure that AI “contributes to the well-being of humanity.”

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” WHO Chief Scientist Jeremy Farrar stated.

AI has been making significant incursions into medicine in recent months, underscored by its use in cancer detection, research, and evidence-based medicine.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain
