National Institute of Standards and Technology campus entrance

NIST launches AI Safety Institute as new draft bill aims to bolster agency’s AI mandate

The U.S. Department of Commerce (DOC) has launched a new consortium to develop methods for evaluating AI systems to improve safety.

Through the National Institute of Standards and Technology (NIST), the department is calling on interested participants to join the Artificial Intelligence (AI) Safety Institute Consortium.

The consortium is NIST’s response to President Joe Biden’s executive order on safe and secure AI. The first of its kind on AI, the order seeks to protect consumer privacy, advance equal rights, and create new security standards for AI.

In his order, President Biden directed NIST to formulate an AI risk management framework and to offer guidance on authenticating human-created content at a time when AI-generated material is becoming increasingly difficult to distinguish from the real thing. NIST is also charged with creating benchmarks for auditing AI capabilities and setting up AI test environments. The agency says the new consortium will be central to these efforts.

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” commented NIST Director Laurie E. Locascio.

“Together we can develop ways to test and evaluate AI systems so that we can benefit from AI’s potential while also protecting safety and privacy.”

NIST has been one of the agencies at the forefront of offering guidance on AI. In January, it published the AI Risk Management Framework to guide developers in managing the risks of AI.

However, as with many other efforts in the U.S., the framework is voluntary and lacks an enforcement mechanism.

This could change soon. This week, Senators Mark Warner (D-Va.) and Jerry Moran (R-Kan.) introduced a draft bill in the Senate that would give the Biden administration real enforcement power on AI. The bill would elevate NIST's role in regulating AI, requiring all federal agencies to adhere to its AI safety standards.

“It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks,” commented Sen. Warner.

While Biden’s executive order has received most of the attention and, according to senior White House officials, “has the force of law,” the bill would have the more significant impact if adopted, since legislation would bind future administrations rather than depending on a standing executive order.

Even with the new bill, the U.S. still lags behind Europe and some Asian countries in regulating AI. Europe has adopted a stringent approach that restricts AI developers’ scope and focuses on privacy, safety, and security. Asian countries, led by Japan, take a more permissive approach that prioritizes leveraging AI for economic growth.

For artificial intelligence (AI) to work lawfully and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?

