
The U.S. Department of Commerce (DOC) has launched a new consortium to develop methods for evaluating AI systems to improve safety.

Through the National Institute of Standards and Technology (NIST), the department is calling on interested participants to join the Artificial Intelligence (AI) Safety Institute Consortium.

The consortium is NIST’s response to President Joe Biden’s executive order on safe and secure AI. The first of its kind on AI, the order seeks to protect consumer privacy, advance equal rights, and create new security standards for AI.

In his order, President Biden directed NIST to formulate an AI risk management framework and to offer guidance on authenticating human-created content at a time when AI-generated output is increasingly difficult to distinguish from it. NIST is also charged with creating benchmarks for auditing AI capabilities and building AI test environments. The agency says the new consortium will be central to these efforts.

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” commented NIST Director Laurie E. Locascio.

“Together we can develop ways to test and evaluate AI systems so that we can benefit from AI’s potential while also protecting safety and privacy.”

NIST has been one of the agencies at the forefront of offering guidance on AI. In January, it published the AI Risk Management Framework to guide developers in managing the risks of AI.

However, as with many other efforts in the U.S., the framework is voluntary and lacks an enforcement mechanism.

This could change soon. This week, Senators Mark Warner (D-Va.) and Jerry Moran (R-Kan.) presented a draft bill to the Senate that would give the Biden administration some enforcement teeth on AI. The bill elevates NIST's role in regulating AI, requiring all federal agencies to adhere to its AI safety standards.

“It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks,” commented Sen. Warner.

While Biden’s executive order has received most of the attention and, according to senior White House officials, “has the force of law,” the bill would have the more significant impact if adopted.

Even with the new bill, the U.S. still lags behind Europe and some Asian countries in regulating AI. Europe has adopted a stringent approach that restricts AI developers’ scope and focuses on privacy, safety, and security. Asian countries, led by Japan, are more open-minded with their approach, which prioritizes leveraging AI for economic growth.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?
