As artificial intelligence (AI) continues to make headlines, it has increasingly become a topic of discussion for organizations worldwide. The World Economic Forum (WEF) recently published its Global Risks Report 2024, and AI appears on many of its pages. The WEF acknowledges that AI can have a positive impact, but it focuses primarily on the risks AI presents in a global framework.

Generative AI and global risks

The report states that one of the most pressing concerns is the proliferation of misinformation and disinformation from generative AI. Generative AI, especially platforms that let users create images, videos, and audio, has made it very easy to create and spread false content that looks and sounds legitimate. The WEF fears that if AI-induced misinformation and disinformation persist, many people could suffer human rights violations.

One group significantly at risk is those working in government. Government officials and other high-profile individuals are prime targets for manipulation campaigns built on AI-generated content. The WEF is concerned that such campaigns could destabilize political systems and global markets, induce internal conflict and terrorism, and strain international relations. Although law enforcement officials are aware of these problems, the WEF reports that regulatory development has not kept pace with the evolution of the AI models that make this kind of content possible.

The report also addresses the problems that could come from AI production being concentrated within a handful of companies and nations. The WEF believes this creates vulnerabilities by leaving the world overreliant on a limited set of AI models and cloud providers, which could lead to cybersecurity risks affecting infrastructure worldwide.

Beyond the issues of a highly centralized system, the fact that generative AI platforms are accessible to anyone with an internet connection opens a broad range of avenues for misuse, with malicious actors leveraging AI systems to spread misinformation, mount cyberattacks, and develop weapons.

AI’s economic and military disparities

The WEF also flags AI’s potential to deepen global inequality. It says AI is expected to create winners and losers across different economies, with low-income countries being left behind. The WEF believes that countries better positioned to create and use AI systems will end up influencing global power dynamics, which could lead to significant variations in economic productivity, job creation, and access to healthcare and education across economies.

But those inequalities go beyond economic opportunity and education. The WEF is concerned about how AI-induced inequality will affect military power. It believes AI will create imbalances in autonomous weapons systems and nuclear capabilities, and that if the threat AI poses in the defense sector is not properly accounted for and managed, the consequences for society could be severe.

How do we solve these AI problems?

At the heart of the WEF’s report is the idea that AI tools are becoming widespread, yet there have been few public awareness programs or other measures to mitigate the risks associated with AI systems.

Many of the risks the WEF attributes to AI systems concern people who lack access to them: they are likely to be left behind as other economies adopt the technology and use it to advance rapidly. The other half of the risk, the WEF believes, stems from the fear that a lack of public education and regulation will leave individuals exposed as bad actors exploit new technologies to commit and facilitate crimes in ways that the general public, and in some cases even law enforcement, has never seen before.

To solve these issues, the report emphasizes the importance of global cooperation when it comes to addressing AI. It calls for a united approach to AI governance involving public and private sectors worldwide.

To combat misinformation and disinformation, the report suggests that safeguards be put in place on generative AI platforms, such as visible and hidden watermarks on AI-generated content, so that individuals can quickly determine whether what they are viewing is legitimate or counterfeit.
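To make the watermarking idea concrete, here is a minimal sketch of how a hidden, machine-checkable tag could be attached to and verified on generated content. This is illustrative only: the key, tag format, and functions are hypothetical, and real provenance schemes (such as C2PA content credentials) use cryptographic signatures and embedded metadata rather than a shared secret.

```python
import hmac
import hashlib

# Hypothetical provider-side secret; real schemes use asymmetric signatures
# so that anyone can verify content without holding the signing key.
SECRET_KEY = b"provider-secret"

def watermark(content: bytes) -> bytes:
    """Append a hidden tag derived from the content itself."""
    tag = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest().encode()
    return content + b"||" + tag

def verify(stamped: bytes) -> bool:
    """Recompute the tag and check it matches the content it accompanies."""
    content, _, tag = stamped.rpartition(b"||")
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

stamped = watermark(b"AI-generated image bytes")
print(verify(stamped))                               # untouched content verifies
print(verify(stamped.replace(b"image", b"video")))   # altered content fails
```

The point of the design is that the tag is bound to the exact bytes of the content: any edit after generation breaks verification, which is what lets a viewer distinguish legitimate output from tampered or counterfeit material.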

Lastly, the report calls for efforts to increase AI literacy among regulators and the public to address these challenges and make society more resilient to AI-related risks. If individuals knew what AI-driven cybercrime, misinformation and disinformation campaigns, and other AI scams looked like, they would be better positioned to avoid and report AI misconduct.

On a global scale, there is no easy way to address these problems; many individual pieces, at the local, state, and federal levels, go into creating the full picture of AI in a global context. Unfortunately, most of these governing bodies are reactive rather than proactive, addressing the problems created by technologies like AI only after a devastating event causes significant damage.

However, the fact that AI, its advantages, and its drawbacks are being discussed by organizations that influence global decisions is a step in the right direction toward building the supporting infrastructure AI needs to produce optimal outcomes. At the very least, these discussions affirm that artificial intelligence will significantly impact the world and is a technology trend people should be educating themselves about.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing that data remains immutable. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
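The immutability claim above rests on a simple mechanism: each record carries a hash that chains it to the record before it. The following is a minimal sketch of that idea, assuming an illustrative block layout (the field names and structure are invented for this example and do not reflect any particular blockchain's format).

```python
import hashlib
import json

def block_hash(data: str, prev: str) -> str:
    # Hash the record together with the previous block's hash, so
    # altering any earlier record invalidates every hash after it.
    payload = json.dumps({"data": data, "prev": prev}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain: list, data: str) -> None:
    """Add a record linked to the hash of the current chain tip."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev,
                  "hash": block_hash(data, prev)})

def verify_chain(chain: list) -> bool:
    """Walk the chain and confirm every link and hash is intact."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block_hash(block["data"], block["prev"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
for record in ["training sample A", "training sample B"]:
    append(chain, record)
print(verify_chain(chain))   # intact chain verifies
chain[0]["data"] = "tampered"
print(verify_chain(chain))   # any edit to history is detected
```

This is why a hash-chained ledger is tamper-evident rather than merely tamper-resistant: past data can still be rewritten locally, but the rewrite cannot go unnoticed by anyone who verifies the chain.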

Watch: Cybersecurity fundamentals in today’s digital age with AI & Web3
