
The U.S. Department of Commerce (DOC) has launched a new consortium to develop methods for evaluating AI systems to improve safety.

Through the National Institute of Standards and Technology (NIST), the department is calling on interested participants to join the Artificial Intelligence (AI) Safety Institute Consortium.

The consortium is NIST’s response to President Joe Biden’s executive order on safe and secure AI. The first of its kind on AI, the order seeks to protect consumer privacy, advance equal rights, and create new security standards for AI.

In his order, President Biden directed NIST to formulate an AI risk management framework and offer guidance on authenticating human-created content at a time when AI-generated output is becoming increasingly difficult to distinguish from human work. NIST is also charged with establishing benchmarks for auditing AI capabilities and creating AI test environments. The agency says the new consortium will be central to these efforts.

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” commented NIST Director Laurie E. Locascio.

“Together we can develop ways to test and evaluate AI systems so that we can benefit from AI’s potential while also protecting safety and privacy.”

NIST has been one of the agencies at the forefront of offering guidance on AI. In January, it published the AI Risk Management Framework to guide developers in managing the risks of AI.

However, as with many other efforts in the U.S., the framework is voluntary and lacks an enforcement mechanism.

This could change soon. This week, Senators Mark Warner (D-Va.) and Jerry Moran (R-Kan.) introduced a bill in the Senate that would give the Biden administration more bite on AI enforcement. The bill would elevate NIST's role in regulating AI, requiring all federal agencies to adhere to its AI safety standards.

“It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks,” commented Sen. Warner.

While Biden’s executive order has received most of the attention and, according to senior White House officials, “has the force of law,” the bill, if adopted, would have the more significant impact.

Even with the new bill, the U.S. still lags behind Europe and some Asian countries in regulating AI. Europe has adopted a stringent approach that restricts AI developers’ scope and focuses on privacy, safety, and security. Asian countries, led by Japan, take a more open approach that prioritizes leveraging AI for economic growth.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?
