NATO’s AI strategy could establish responsible use of the technology, AI policy chief says

The North Atlantic Treaty Organization (NATO) has announced a new artificial intelligence (AI) strategy focusing on responsible usage to prevent misuse by state and non-state bad actors.

The new AI strategy is an upgrade to the existing playbook from 2021, which revolved around protecting member states from cybercriminals and promoting safe use by citizens within their borders. The updated strategy expands NATO’s guiding AI principles to six, reflecting the current realities of emerging technologies and their adoption.

NATO’s AI policy chief, Nikos Loutas, confirmed the new blueprint at the London Artificial Intelligence Summit, with key industry stakeholders and policymakers in attendance. Based on his speech at the summit, the military alliance will focus on lawfulness, accountability, and responsibility, placing a measure of liability on AI developers.

New additions to existing principles include traceability, explainability, reliability, and bias mitigation to prevent discrimination and ensure the accuracy of AI-generated content. Lastly, the addition of a governability principle is expected to bring AI developers and their models within the purview of governments on both sides of the Atlantic.

To ensure strict adherence to the six principles, NATO established a new Data and AI Review Board comprising representatives of member states with strong AI backgrounds and industry players. A major responsibility of the newly minted Board is to translate the principles from theory into real-world applications.

“The Board creates practical Responsible AI toolkits, guides Responsible AI implementation in NATO and supports Allies in their Responsible AI efforts,” read a NATO statement.

It appears that the Board will perform other important functions, including rolling out NATO’s AI rules for member states and regulating the exchange of information between countries. In a show of strength, the Board launched an AI certification standard for institutions in the alliance to ensure their standards align with its values and international law.

Loutas disclosed that NATO will keep a keen eye on AI developments by its competitors to maintain “technological superiority.” The AI policy head noted that a lackluster approach by the alliance could result in grave consequences for member states, including AI-guided missile attacks from adversaries.

A collaborative effort at reining in the technology

Aside from NATO’s push for safe AI systems, the United Nations Security Council has confirmed plans to lay down ground rules for safe and responsible AI. The body has since termed the emerging technology as “an existential threat to humanity on a par with the risk of nuclear war,” prompting a push for regulation.

“Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” said UNESCO Director-General Audrey Azoulay. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”

The EU has also adopted a collaborative stance on AI regulation, while the U.K.’s Bletchley Declaration is considered a “big win” for authorities seeking to police developers.

For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Improving logistics, finance with AI & blockchain

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.