Blockchain can help solve some fears about AI as Geoffrey Hinton leaves Google

Amidst all the excitement about the wave of new AI applications that have taken the world by storm in 2023, there has been a steady chorus of high-profile AI experts warning us about the dangers.

The Godfather of AI, Geoffrey Hinton, recently made headlines when he resigned from Google (NASDAQ: GOOGL) so he could openly express his fears about the potential for AI to create significant harm to the world.

This tells us two things: Google isn’t a place where dissenting voices can speak freely, and the dangers that lie ahead are worrying enough that top experts will resign from high-profile jobs to warn us about them.

What are the potential harms of AI?

The first and most apparent harm AI could cause is the widespread displacement of workers. This fear is felt most keenly by artists, writers, and creators of all kinds, who have watched the first wave of AI tools do with ease what took them years or decades to learn. Call center workers, customer service reps, and even doctors and lawyers fear what AI might mean for their professions.

Yet the automation of jobs is nothing new, and previous technological breakthroughs were met with the same concerns. While Hinton does mention it as a worry, job displacement alone isn’t enough to make a heavy hitter like him tell the New York Times he now partly regrets his life’s work.

Hinton’s concerns are more serious; he fears the mass proliferation of fake images, videos, and text online—fake news on steroids. As if the truth weren’t already difficult enough to separate from fiction, a flood of images and videos almost indistinguishable from reality will amplify the problem a hundredfold. He also fears that future AI systems will learn how to manipulate human beings and will pick up new behaviors from the massive volumes of data they are trained on.

Hinton is just one of many experts who have warned us about AI’s potentially harmful side effects in recent months. “Look at how it was five years ago and how it is now. Take that difference and propagate it forward,” he told the NYT.

How can blockchain technology help mitigate some of the dangers?

As I said in my article on the most common blockchain myths, this technology can’t solve every problem, and it certainly won’t stop all of the potentially negative consequences of AI. Blockchain technology can’t stop the automation of jobs, and it can’t stop a rogue AI developed in some black box from turning into Skynet and destroying the human race.

However, blockchain can help in two critical areas: verifying data as authentic, including images, text, and video, and creating accountability for AI developers and researchers.

Verifying information from official sources

First, blockchain can make it possible to verify that information comes from a legitimate source and that it is valid. Imagine a video appearing on the internet claiming to be an official White House statement on a given topic. It begins to spread like wildfire, and there’s no way to tell whether it’s fake just by looking.

Blockchain technology can help verify or debunk such videos by enabling governments, companies, and others to cryptographically sign official documents and media, making it possible to verify their authenticity immediately. I’m not saying most people will bother to do so, but journalists and anyone else who wants to can instantly check for, e.g., an official White House signature to confirm that a given piece of information is genuine.
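As a minimal sketch of the idea (in Python, using the `cryptography` library; the key pair and media bytes here are hypothetical stand-ins for an official publisher’s), signing and verifying a piece of media might look like this:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical publisher key pair: the private key stays with the publisher
# (e.g., the White House); the public key is published for anyone to check.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The publisher signs the raw bytes of the video before releasing it.
video_bytes = b"<raw bytes of the official video file>"
signature = private_key.sign(video_bytes)

# Anyone who downloads the video can verify it against the public key.
try:
    public_key.verify(signature, video_bytes)
    print("Authentic: signed by the official key")
except InvalidSignature:
    print("Unverified: signature does not match this file")
```

In a real deployment, the publisher’s public key (or a hash of it) would be anchored in a blockchain transaction, giving everyone a tamper-proof record of which key is the official one.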

Due to the sheer volume of misinformation likely to be disseminated as a result of AI, it will be impossible to address each piece on a case-by-case basis. Some sort of system will be required to allow interested parties to verify authenticity quickly and easily, and blockchains are an ideal tool for this.

Corporations can also use blockchain technology to verify the authenticity of songs, movies, and other media that claim to be produced by them. Just recently, an AI-generated song imitating Drake made the rounds, sending executives at Universal Music Group scrambling to clarify that it wasn’t an official release and to issue legal threats.

Creating accountability in the world of AI

At least two parties will need to be held accountable in the age of AI: those who create and propagate fake information, and the AI engineers and developers who build systems that could wreak havoc on humanity and/or who blatantly steal data from creators to train their systems. Blockchain technology can help in both cases.

First, one of the major problems with the internet and the proliferation of fake information today is that there’s no way to tell who created or uploaded it. This is partly due to anonymous accounts on platforms like Twitter.

Blockchain can change that, making it possible to know who really lies behind pseudonyms on social media platforms and who is responsible for spreading false information. While the public doesn’t necessarily have to know who is behind a given account, it would be both possible and good for authorities to know in cases where the law is broken.

If you object to this on political grounds, think about a scenario in which someone generates fake porn of a family member or another loved one and uploads it to social media. Wouldn’t you want it to be possible to trace whoever uploaded it first and hold them accountable?

Blockchain can also make it possible to track and trace who does what in a given system. What if AI models were open source, or if governments passed laws allowing companies to develop private AI systems only if all development were tracked on a blockchain? It would then be possible to tell who made the changes that led to disastrous consequences and hold them to account. Time-stamp servers like blockchains are ideal for this, and tools like Sentinel Node show how blockchain can keep track of what happens inside a system.
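To make the time-stamping idea concrete, here is a minimal sketch (in Python, with invented field names; a production system would anchor each hash in a blockchain transaction) of a hash-chained development log in which every entry commits to the entire history before it:

```python
import hashlib
import json
import time

def append_entry(log, author, change):
    """Append a development record that commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": time.time(),
        "author": author,        # who made the change
        "change": change,        # what was changed (e.g., a hash of the diff)
        "prev_hash": prev_hash,  # links this entry to the whole history
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

log = []
append_entry(log, "alice@example-ai.dev", "updated training dataset to v2")
append_entry(log, "bob@example-ai.dev", "changed reward function weighting")

# Tampering with any earlier entry breaks every hash that follows it,
# so an auditor can detect after-the-fact edits to the history.
```

Because each record includes the previous record’s hash, rewriting history requires rewriting everything that follows, which is exactly the property that makes a public blockchain a useful audit trail.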

Likewise, blockchain tech allows artists and creators to retain ownership of their data. Whereas AI systems like ChatGPT and Midjourney have trained on web pages and images without their owners’ permission, blockchains like BSV would put the owners of the data in control, making it possible for them to grant or decline permission to AI developers who want to train models on their work. Creators could also be paid when they choose to allow it. Learn more about micro and nano payments to understand how this could work.
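As a rough illustration (in Python; all names and fields here are hypothetical, and a real system would record grants and micropayments as on-chain transactions), a permission-and-payment check before training might look like this:

```python
from dataclasses import dataclass

@dataclass
class TrainingPermission:
    creator: str         # owner of the work
    work_id: str         # identifier of the image, text, or song
    allowed: bool        # did the creator grant training rights?
    price_satoshis: int  # micropayment due per training use

# A registry of creator-set terms; on a blockchain, these would be
# entries the creators themselves publish and control.
registry = {
    "img-001": TrainingPermission("alice", "img-001", True, 5),
    "img-002": TrainingPermission("bob", "img-002", False, 0),
}

def can_train_on(work_id: str) -> bool:
    """An AI developer checks permission and pays before using a work."""
    perm = registry.get(work_id)
    if perm is None or not perm.allowed:
        return False
    # A real implementation would broadcast a micropayment of
    # perm.price_satoshis to the creator's address here.
    print(f"Paying {perm.price_satoshis} satoshis to {perm.creator}")
    return True

assert can_train_on("img-001")      # alice allows training and gets paid
assert not can_train_on("img-002")  # bob declined; the work is off-limits
```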

Of course, before any of this is possible, blockchain technology will have to scale to billions of transactions per second, systems and applications will have to be built, and relevant laws and regulations will have to be passed to define what is and is not legal concerning AI. That’s another matter for another day, and we’ll have to see how it all plays out.

For now, blockchain can be one positive force helping to solve some of these all-too-real problems headed our way. While it can’t stop an AI overlord from sending an army of robots into your city and wiping everyone out, it can make it possible to distinguish what’s real from what’s fake, and it can create some much-needed accountability for all involved. That’s a start, at least!

For artificial intelligence (AI) to operate within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

CoinGeek Weekly Livestream: The future of AI Generated Art on Aym


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.