Pope Francis calls for ethical AI development

Pope Francis has voiced concerns about the ethical development and use of artificial intelligence (AI). His statement closely follows the European Union’s recent moves toward increased AI regulation.

The Pope released a message on Thursday emphasizing the necessity for an international treaty to guide the ethical development of AI, highlighting the risks associated with technology “devoid of human values.”

The Pope’s message, sent to heads of state and world bodies, was titled “Artificial Intelligence and Peace”, and it warned against “technological dictatorship” that threatens human existence.

There was an emphasis on the military applications of AI, with the Pope expressing concerns about the potential use of AI in the armaments industry and the negative impact it could have on humanity and the earth.

AI advances come with AI challenges

The Pope’s focus on AI is not arbitrary; it is a response to AI’s rapid proliferation and impact in 2023. While AI advancements have arguably brought mostly positive changes, their potential for negative impact, particularly on public figures like the Pope, cannot be ignored.

The spread of realistic deepfake technology—which the Pope fell victim to earlier this year when an AI-generated image of him dressed in a white puffer jacket went viral—demonstrates the capabilities of realistic AI imagery and underscores some of the AI-created issues the world now experiences.

AI-generated image of Pope Francis in a white puffer jacket

These deepfakes present a unique challenge, especially for world leaders, as they can be used to misrepresent and manipulate public perception and discourse.

Companies like Meta (NASDAQ: META), the parent of Facebook, have been developing methods to counter these issues. By embedding visible markers and invisible watermarks in AI-generated content, Meta aims to make such content easier to identify, even after it has been copied and shared.
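The core idea behind an invisible watermark is that a payload is hidden in the content itself, imperceptible to viewers but recoverable by a detector that knows where to look. The sketch below is a toy illustration of the concept using least-significant-bit embedding in a flat pixel buffer; it is not Meta's actual scheme, and the function names and payload are illustrative assumptions.

```python
def embed_watermark(pixels: list[int], tag: bytes) -> list[int]:
    """Hide `tag` in the least significant bits of pixel values.

    Changing only the lowest bit of each 0-255 pixel value is
    visually imperceptible, which is what makes the mark "invisible".
    """
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = pixels[:]  # leave the original buffer untouched
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit
    return out

def extract_watermark(pixels: list[int], length: int) -> bytes:
    """Recover a `length`-byte payload from the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# Example: mark a synthetic "image" (flat pixel buffer) as AI-generated.
pixels = list(range(200))
marked = embed_watermark(pixels, b"AI-GEN")
print(extract_watermark(marked, 6))  # the hidden tag is recoverable
```

Real production watermarks are far more robust (they must survive compression, cropping, and re-encoding), but the detection principle is the same: a machine-readable signal travels with the content.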

Balancing innovation and protection in the AI landscape

It’s clear that AI regulation is at the forefront of global discourse among world leaders and policymakers. AI began experiencing its mass adoption moment at the end of 2022 and into 2023, but new technologies come with new challenges.

Savvy criminals often exploit these innovations faster than law enforcement and regulatory bodies can adapt, which typically produces a surge in new types of crime during the early years of any rapidly growing technology platform.

Regulation and policy formation are crucial, but there needs to be balance when creating law and regulation. Over-regulation could stifle innovation and competitiveness. This concern is particularly relevant in the context of the European Union’s AI Act, which might run the risk of hindering the EU’s competitive edge in the global AI arena.

2023 has been a landmark year for AI, and regulation tends to follow innovation, reacting to new technologies rather than preemptively guiding their development. As we move forward, the challenge for global leaders and policymakers will be to craft regulations that mitigate risk while fostering an environment conducive to innovation and growth.

While the Pope’s message and the EU’s AI Act highlight the risks associated with AI, it is crucial to ensure that these regulations do not stifle innovation, especially in the areas where perceived threats of AI are low risk.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate with an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
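The immutability property mentioned above can be sketched with a minimal hash chain: each record commits to the hash of the one before it, so altering any historical entry breaks every subsequent link. This is a toy illustration of the general principle, not any specific enterprise blockchain product.

```python
import hashlib
import json

def make_block(prev_hash: str, payload: dict) -> dict:
    """Create an append-only record that commits to its predecessor."""
    body = {"prev": prev_hash, "data": payload}
    body["hash"] = hashlib.sha256(
        json.dumps({"prev": body["prev"], "data": body["data"]},
                   sort_keys=True).encode()
    ).hexdigest()
    return body

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any tampering breaks the links."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"prev": block["prev"], "data": block["data"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Record provenance of two (hypothetical) AI training-data inputs.
chain = [make_block("0" * 64, {"source": "training-set-v1"})]
chain.append(make_block(chain[-1]["hash"], {"source": "training-set-v2"}))
print(verify_chain(chain))            # intact chain verifies

chain[0]["data"]["source"] = "edited"  # altering history...
print(verify_chain(chain))             # ...is immediately detectable
```

This is why a hash-linked ledger is attractive for AI data provenance: quality and ownership claims about training inputs can be audited after the fact, because rewriting them without detection is computationally infeasible.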

Watch: Artificial intelligence needs blockchain

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.