The U.S. Department of Commerce (DOC) has launched a new consortium to develop methods for evaluating AI systems to improve safety.

Through the National Institute of Standards and Technology (NIST), the department is calling on interested participants to join the Artificial Intelligence (AI) Safety Institute Consortium.

The consortium is NIST’s response to President Joe Biden’s executive order on safe and secure AI. The first of its kind on AI, the order seeks to protect consumer privacy, advance equal rights, and create new security standards for AI.

In his order, President Biden directed NIST to formulate an AI risk management framework and offer guidance on authenticating human-created content at a time when AI systems are becoming strikingly capable. NIST is also charged with creating a benchmark for auditing AI capabilities and establishing AI test environments. The agency says that the new consortium will be central to these efforts.

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” commented NIST Director Laurie E. Locascio.

“Together we can develop ways to test and evaluate AI systems so that we can benefit from AI’s potential while also protecting safety and privacy.”

NIST has been one of the agencies at the forefront of offering guidance on AI. In January, it published the AI Risk Management Framework to guide developers in managing the risks of AI.

However, as with many other efforts in the U.S., the framework is voluntary and lacks an enforcement mechanism.

This could change soon. This week, Senators Mark Warner (D-Va.) and Jerry Moran (R-Kan.) introduced a draft bill in the Senate that would give the Biden administration some bite on AI enforcement. The bill elevates NIST’s role in regulating AI, requiring all federal agencies to adhere to its AI safety standards.

“It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks,” commented Sen. Warner.

While Biden’s executive order has received most of the attention and, according to senior White House officials, “has the force of law,” the bill, if adopted, would have the more significant impact.

Even with the new bill, the U.S. still lags behind Europe and some Asian countries in regulating AI. Europe has adopted a stringent approach that restricts AI developers’ scope and focuses on privacy, safety, and security. Asian countries, led by Japan, take a more open approach that prioritizes leveraging AI for economic growth.

For artificial intelligence (AI) to work lawfully and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing data immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?
