
The artificial intelligence (AI) industry is a fast-moving space. Innovators, governments, and everyone in between currently have their attention fixed on the products and services that AI fuels. In many cases, AI can significantly improve business operations and our quality of life, but no revolutionary technology comes without risks that have the potential to cause harm.

Here are a few significant events that took place in AI last week:

The United Nations adopts resolution on AI

Last week, the United Nations adopted a United States-led draft resolution on artificial intelligence.

The document, titled “Seizing the opportunities of safe, secure, and trustworthy artificial intelligence systems for sustainable development,” focuses on leveraging AI systems to achieve sustainable development goals while ensuring they are developed and used in ways that are human-centric, reliable, ethical, inclusive, and in respect of human rights and international law.

The resolution included guidelines on how to bring that vision to fruition, such as emphasizing the need for global consensus and frameworks related to AI, urging member states to share best practices on data governance, and encouraging international cooperation to advance trusted cross-border AI development.

However, the draft offered few, if any, tangible steps to enforce the resolution. This is a recurring problem in AI policy, especially when it is created by global organizations like the UN: there are typically few ways to enforce these laws and regulations unless an entity blatantly violates them.

Microsoft hires Google DeepMind co-founder

Microsoft (NASDAQ: MSFT) continues advancing its AI position with its two latest hires, Google DeepMind (NASDAQ: GOOGL) Co-Founder Mustafa Suleyman and Karén Simonyan, chief scientist and co-founder of Inflection AI. Both will join Microsoft to form a new organization, Microsoft AI, that focuses on advancing Copilot and Microsoft’s other consumer AI products and research.

Microsoft reports that many members of the Inflection AI team have followed both Suleyman and Simonyan over to Microsoft AI. Inflection AI was a competitor to OpenAI, a company in which Microsoft is deeply invested.

With Inflection AI losing many of its key employees to Microsoft AI, Microsoft is putting itself in a position to grab more of the market share in the AI space.

Protecting entertainers from AI

Tennessee is now the first state in the United States to protect artists from the challenges they face due to artificial intelligence. Governor Bill Lee has signed legislation protecting music industry professionals, including songwriters and performers, from others using artificial intelligence to impersonate them.

“We employ more people in Tennessee in the music industry than any other state,” Lee told reporters shortly after signing the bill into law. “Artists have intellectual property. They have gifts. They have a uniqueness that is theirs and theirs alone, certainly not artificial intelligence.”

Effective July 1, the new law introduces a new civil action that holds individuals accountable for unauthorized use of an artist’s name, photographs, voice, or likeness.

Tennessee’s bill may serve as a precedent for other states, not only in protecting the rights of entertainers, but also in holding accountable individuals who use generative AI to impersonate another person. Earlier this year, we saw incidents in which generative AI was used to create convincing audio that impersonated President Joe Biden.

As AI systems continue to evolve, their capabilities, including their ability to impersonate others, will only improve, which is why many companies are trying to come up with ways to quickly and easily identify content created or altered with artificial intelligence.

Department of Homeland Security embraces AI

The Department of Homeland Security (DHS) has unveiled its first-ever AI roadmap; the DHS AI Roadmap for 2024 outlines the organization’s strategic initiatives for using AI to support homeland security while addressing potential risks and ensuring responsible use of the technology.

The DHS has been using AI for many years in border security, cybersecurity, and disaster response, but the roadmap outlines how the organization will continue to use AI and plans to apply it to identity verification, drug detection, and aiding law enforcement in criminal investigations. At the same time, the DHS recognizes that the proliferation and evolution of AI tools present new challenges in cybersecurity and misinformation.

AI developments summary: Government policies and tech giants’ strategies

Last week’s developments captured the multi-faceted narratives around AI that we frequently see in the headlines: government organizations creating, or trying to create, policies around the technology to ensure that it is implemented ethically and responsibly worldwide. We also see organizations releasing guidelines and roadmaps that detail how they have used AI in the past, how they will use it in the future, and the risks associated with AI that fall under their jurisdiction. This surge in policy-making activity most likely has two primary drivers: (1) different nations trying to position their organizations at the forefront of a rapidly evolving technology landscape, and (2) in the United States in particular, the need to align with President Biden’s executive order on artificial intelligence.

On the other hand, we have the action taking place within the industry itself, with tech giants like Microsoft continually maneuvering to establish themselves as one of the few leaders in the AI arena. But given that the focus has lately been on governmental strategies, legislative measures, and corporate positioning, we may be approaching a point where the shift turns toward developments in AI that directly impact consumer products and services.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
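The immutability idea referenced above can be illustrated with a minimal sketch in Python (a generic hash chain, not any specific enterprise blockchain product; the record fields, function names, and "sensor-42" data are made up for the example): once records are chained by cryptographic hashes, altering an earlier entry is detectable because every later hash stops verifying.

# Illustrative sketch only: records are chained by SHA-256 hashes,
# so tampering with any earlier record breaks verification of the chain.
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    # Hash the canonical JSON of the record together with the previous hash.
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash.encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain: list = []
append(chain, {"source": "sensor-42", "value": 7})
append(chain, {"source": "sensor-42", "value": 9})
print(verify(chain))             # True: chain is intact
chain[0]["record"]["value"] = 0  # tamper with an earlier record
print(verify(chain))             # False: alteration is detected

A production system would add distributed consensus, signatures for ownership, and on-chain anchoring, but the detectability of tampering shown here is the core property the article is pointing to.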

Watch: AI Forge masterclass: Why AI & blockchain are powerhouses of technology
