Rwanda pushes for safe and responsible AI development

Rwanda has launched a national artificial intelligence (AI) policy to guide local companies in developing safe and responsible AI.

The policy outlines the East African nation’s vision for AI, says senior ICT Ministry official Victor Muvunyi. For Rwanda, inclusivity and ethical deployment are the guiding principles as the country seeks to leverage AI to improve people’s lives. The policy also pushes Rwandan companies to use AI to address the unique challenges Rwandans face.

In addition to the policy, the ICT Ministry has established a national AI office whose mandate includes ensuring local companies implement the technology “responsibly and effectively.”

“This office will help guide our AI [development] journey, addressing challenges and fostering innovation while keeping our cultural and ethical values at the forefront,” Muvunyi told local newspaper The New Times.

AI adoption has accelerated in African nations in recent years as the region seeks to keep stride with its peers. Some, like Morocco, are using the technology in their courts to conduct research and retrieve archived texts. Others, like South Africa and Kenya, are using it to solve specific challenges facing the continent, such as climate modeling to help farmers plan better.

However, Africa faces greater hurdles in adopting AI than other regions. Challenges such as insufficient structured data ecosystems, skills shortages, poor infrastructure and restrictive policies have impeded AI development.

Rwanda wants to help AI companies mitigate these challenges, Muvunyi stated. In addition to the new AI office, the government is relying on the Rwanda Utilities Regulatory Authority to promote AI development. The Authority also promotes AI principles that keep these companies accountable and protect the rights of the people.

“The principles include beneficence and non-maleficence, among others, that ensure that AI systems not only benefit society but also protect human dignity and prevent harm,” Muvunyi told the outlet.

AI safety is a global issue. Governments in the United States, the European Union, the United Kingdom and Asia have been pushing AI developers to commit to prioritizing safety when developing the technology.

Last month, Google (NASDAQ: GOOGL), Meta (NASDAQ: META) and OpenAI were among the industry leaders that made a fresh pledge in Seoul to prioritize safety when developing AI. Last year, these companies had made a similar pledge to the Biden administration.

Yet another challenge facing the sector is data privacy. Companies like Meta and OpenAI have landed in legal trouble for disregarding data laws when training and deploying their AI models.

In Rwanda, the country’s Data Protection and Privacy Law has become critical to protecting the public in the face of aggressive AI development. The law, which took effect in 2021, requires companies to obtain consent from citizens before using their data and to be transparent in how that data is handled and stored.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.