The United Nations (UN) Security Council will hold an inaugural meeting to analyze the threats of artificial intelligence (AI) to global peace and security.

In disclosing the plans, U.K. Ambassador to the UN Barbara Woodward said the meeting is intended to exchange ideas on guardrails for safe AI use. On July 1, the U.K. assumed the Security Council’s presidency, a position it will hold until the end of the month.

The AI-themed meeting, chaired by U.K. Foreign Secretary James Cleverly, will feature presentations from AI researchers on the potential threats AI poses to international security. UN Secretary-General Antonio Guterres had previously noted that if AI innovation continues without proper regulatory frameworks, it could become “an existential threat to humanity on a par with the risk of nuclear war.”

Experts have warned that AI could be deployed in autonomous weaponry or could facilitate the rogue takeover of nuclear weapons. The spread of misinformation by generative AI platforms could also trigger crises and erode trust in electoral processes in developing countries.

The U.K. hopes to use its term at the helm of the Security Council to promote a “multilateral approach” to AI regulation. Woodward remarked that despite the grave risks posed by AI, there is a range of positives to be gleaned from the technology, including support for peacekeeping and humanitarian aid operations.

“It could potentially help us close the gap between developing countries and developed countries,” Woodward remarked.

The U.K. has seized the initiative in AI regulation with the launch of an AI task force in April, committing over $100 million to ensure the safe usage of the technology. In June, U.K. Prime Minister Rishi Sunak announced that the country had secured priority access to future models from leading AI developers.

Walking a dangerous precipice

Attempts to develop a robust AI framework have produced a patchwork of approaches across jurisdictions. In the European Union (EU), regulators have cleared a major legislative hurdle, but the delay before the law takes effect has cast doubt over how AI risks to Web3, mass media, and user privacy will be handled in the near term.

In the U.S., Congress is pushing for the labeling of AI-generated content while enlisting the help of social media platforms to flag non-compliant posts. While several technologists call for a moratorium on AI development to allow regulations to catch up, others are banding together to take swipes at the EU for stifling AI innovation.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch AI Summit PH 2023: Philippines is ripe to start using artificial intelligence
