
WHO explores AI integration in public health, regulation at annual roundtable discussion

As artificial intelligence (AI) enters healthcare, the World Health Organization (WHO) is exploring multiple options to integrate the emerging technology seamlessly.

The latest effort by the United Nations agency to prioritize AI discussions comes via a Strategic Roundtable at its 77th World Health Assembly, an annual meeting of global stakeholders in public health. The roundtable, scheduled for the end of May, is expected to draw country representatives, academics, pharmaceutical firms, and policymakers.

WHO stated that the roundtable will probe several strategies for incorporating AI into public health, driven by the benefits associated with adopting the technology. Officials of the organization believe that AI could provide healthcare practitioners with new tools for drug development, diagnosis, and administrative tasks, as well as solutions to worker shortages in the ecosystem.

The high-level discussions will also entail establishing a global blueprint for digital health and AI with an implementation date of 2030. Stakeholders are expected to explore the provision of technical support for developing countries, resource mobilization, and consensus-building efforts.

“The rapid growth of AI underscores the urgent need for this roundtable, which will drive collaboration to harness AI for health while ensuring a focus on justice and inclusion, and protections for human rights and privacy,” read the announcement.

WHO has taken a proactive approach toward the technology, most recently launching the Smart AI Resource Assistant for Health (SARAH), a chatbot designed to provide users with information on diseases. Prior to SARAH, the agency tested the waters with Florence, an AI chatbot that provided health information during COVID-19, but it remains wary of the risks associated with the technology.

To harness the full potential of the technology, the roundtable will work out a governance framework in the coming months while establishing the necessary guardrails against data breaches and copyright violations.

“Generative AI technologies have the potential to improve healthcare but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said WHO Chief Scientist Jeremy James Farrar.

Grim risks lurk in the shadows

WHO said emerging technologies carry an element of risk if left to develop without proper guardrails, and warned that risks stemming from AI could be especially severe. Relying on large language models (LLMs) may produce inaccurate outputs, given their tendency toward sycophancy and errors in their training data.

Apart from the risks of automation bias, there are fears of discrimination and job losses for clerical staff in health institutions. WHO also urges member states to develop robust cybersecurity policies to prevent patients’ sensitive data from falling into the hands of bad actors in the event of a security breach.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI is for ‘augmenting’ not replacing the workforce


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.