Twenty-eight countries have formed a unified agreement on “the urgent need to understand and collectively manage potential risks” of artificial intelligence (AI) technologies. The statement, titled the Bletchley Declaration on AI safety, kicked off a two-day summit at Bletchley Park in the United Kingdom to discuss the issues. Representatives from the United States, China, the European Union, and countries across Africa, Asia, and South America attended.

The Bletchley Declaration arrived in the same week as the Biden administration’s Executive Order on safe, secure, and trustworthy AI. It contained many of the same concerns over the potentially catastrophic risks of unregulated AI development, whether intentional or unintentional, and called for a better understanding of “frontier AI risks” through further collaboration between industry, government, academia, and other stakeholders.

These risks are “best addressed through international cooperation,” the declaration said. Beyond the initial summit, follow-up meetings are planned, starting with a “mini virtual summit” co-hosted by South Korea in six months and another in-person event in France a year from now.

The U.K. government also announced its own program last week, establishing the world’s first AI Safety Institute to complement actions from bodies like the G7, the OECD, the United Nations, and the Council of Europe.

The declaration is “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren,” Prime Minister Rishi Sunak said.

The declaration and summit venue are symbolic: the Bletchley Park estate is the birthplace of computing in the U.K. It was home to the Government Code and Cypher School (GC&CS), where Alan Turing and his fellow codebreakers cracked German military encryption in WWII, and its teams’ work also contributed to the construction of “Colossus,” the world’s first programmable electronic digital computer, built between 1943 and 1945.

It remains to be seen how cooperative the diverse range of nations participating at the Bletchley Park summit will be. In a decade when national rivalries and alliances have morphed into outright hostility and kinetic battles, it’s easy to see how “cooperation and collaboration” could end up as an empty, superficial gesture as regions compete for future dominance. The U.S. has made no secret of its intentions to dominate the AI development field, placing restrictions on exports of AI chips to China. China will likely counter these moves by increasing confidentiality around domestic AI development rather than sharing it with other potentially unfriendly countries.

Let’s have auditable datasets on the blockchain, at least

The Declaration, Joe Biden’s Executive Order, and statements from other governments on AI risks and safety this week were relatively broad and general in content. They mark the beginning of a process rather than the final word on AI matters; therefore, specific technologies and regulatory techniques to manage the risks will no doubt emerge later.

As we’ve mentioned, one of the most critical aspects of AI development is its human-sourced inputs—the datasets used to train machine learning systems. Transparency is paramount: what data is being used, by whom, and how. Blockchain-based, tokenized datasets on a fast and scalable blockchain are the best way to keep this process functional and auditable.
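To illustrate what such an audit trail could look like, here is a minimal Python sketch: a dataset file is hashed, bundled with ownership and licence metadata, and the resulting payload is what a publisher would commit on-chain. The file name, owner, licence, and the broadcast step are all placeholders rather than any particular platform’s API.

```python
# A minimal sketch of how a training dataset could be fingerprinted and
# packaged as an audit record for anchoring on-chain. The file name and the
# broadcast step are placeholders, not any specific vendor's API.
import hashlib
import json
import time

def dataset_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the dataset file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_audit_record(path: str, owner: str, licence: str) -> dict:
    """Assemble the metadata an auditor or regulator would want to see."""
    return {
        "dataset_sha256": dataset_fingerprint(path),
        "owner": owner,
        "licence": licence,
        "recorded_at": int(time.time()),
    }

if __name__ == "__main__":
    # Placeholder dataset so the sketch runs end to end.
    with open("training_corpus.jsonl", "w") as f:
        f.write('{"text": "example training sample"}\n')

    record = build_audit_record("training_corpus.jsonl", "ExampleLab", "CC-BY-4.0")
    payload = json.dumps(record, sort_keys=True).encode()
    print(payload.decode())
    # The payload would then be committed in a transaction (for example, in an
    # OP_RETURN output) by whatever wallet or node tooling the publisher uses.
```

The point is not the specific fields but that anyone holding the dataset can recompute the digest and confirm it matches the on-chain record.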

More prominent corporate players in the AI space, like Meta (NASDAQ: META), Google (NASDAQ: GOOGL), Amazon (NASDAQ: AMZN), OpenAI, and Microsoft (NASDAQ: MSFT), have embarked on a kind of self-regulation mission by pledging to ensure their AI products are “safe” through internal and external testing procedures. Again, though, the key is transparency—how do we know what processes exist, and more importantly, how can we guarantee they’re acting in a manner most people and governments consider “responsible”? Open and auditable blockchain records are the best answer to this question.
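On the audit side, verification is equally simple in principle: anyone holding a published safety-testing report and the digest committed on-chain can check that the two match. The sketch below is hypothetical; the “anchored” digest is computed locally here as a stand-in for a value that would actually be read from the committing transaction.

```python
# Minimal sketch: checking a published safety-test report against a digest
# previously committed on-chain. The anchored value below is a stand-in; in
# practice it would be read from the committing transaction.
import hashlib

def verify_report(report_bytes: bytes, anchored_sha256: str) -> bool:
    """True only if the report is byte-for-byte what was committed."""
    return hashlib.sha256(report_bytes).hexdigest() == anchored_sha256

if __name__ == "__main__":
    report = b"example safety-test report contents"
    anchored = hashlib.sha256(report).hexdigest()  # stand-in for the on-chain value
    print("report verified:", verify_report(report, anchored))
```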

Bletchley Declaration is a good start, but more action needed

Seán Ó hÉigeartaigh, Program Director of AI: Futures and Responsibilities at Cambridge University’s Centre for the Study of Existential Risk (CSER), cautiously welcomed the Bletchley Declaration. While it was promising to see so many countries from different regions acknowledge the need to develop AI technologies responsibly, he pointed to “many key aspects of AI that remain under-defined.”

“I was pleased to see a call for transparency and accountability from these actors on their plans to monitor potentially harmful capabilities. The AI safety policies released by six leading companies last week represented a good step in this direction. However, our analysis found that they were still lacking in terms of key detail. It is crucial that the Declaration lead to further focus on developing these policies, with appropriate external oversight by academia, civil society, and government,” Ó hÉigeartaigh said.

The real test, he added, would be if the governments of countries that joined the declaration would follow up with genuine “cooperation on the concrete governance steps that need to follow.”

As its name suggests, CSER is an interdisciplinary research center dedicated to the study and mitigation of largely unstudied existential risks. It works with industry, policymakers, and academics to weigh the upsides and downsides of emerging technologies and human activity, hoping to balance progress with responsible handling of anything that carries the potential for “catastrophic pitfalls.”

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI & blockchain will be extremely important—here’s why
