AI and blockchain go hand in hand. This match made in heaven becomes increasingly obvious as we explore the good and bad opportunities with booming AI-powered tech. To dive deeper into this subject, which our Founder is also passionate about, I moderated a panel on Ethical AI and Blockchain during the AI & Blockchain Virtual Expo 2024 on October 31. Here is a summary of what went down in case you missed it.

The panelists spanned a broad spectrum of experience, from technical research to industry associations, corporate, and healthcare. Their different perspectives brought fascinating insights into the hot topic of AI ethics and how we can prevent and solve the major concerns that surround it. The group included Wei Zhang, Research Director at nChain; Kristopher Klaich, Policy Director at the Chamber of Digital Commerce; Kerstin Eimers, Global Web3 at Deutsche Telekom; and Daniel Pietrzykoski, Technical Strategy & Product Innovation – Blockchain & Program Management at Johnson & Johnson.

The discussion kicked off with what AI ethics means to each panelist and their concerns surrounding AI, illustrated with real-world examples.

Klaich and Eimers pointed out that we are still in the experimental stages of these future technologies, which brings up scam issues and other challenges. Klaich used the example of AI agents operating in the blockchain world, creating their own wallets and acting as influencers to pump a token, or even offering small amounts of money to others to promote it. This behavior can manipulate the market, making it difficult to tell who is a real person and who is not.

“The opportunities are sort of terrifying in that sense and endless,” Klaich said.

Eimers pointed to apps that create “ideal” AI-generated pictures and headshots, which can lead to mental health issues, especially in young women.  

“I think there are a lot of areas to be watched, which are awesome on the one hand but also bring your ethical struggles and challenges at the same time,” Eimers said.

Zhang revealed that, to him, the biggest challenge is regulation. He used the analogy of a criminal escaping a scene in a fast car: the solution is not to ban cars; it’s to ensure that the police also have fast cars.

“This leads to a generalized kind of conflict between technology advancement and regulation and then the misuse of technology,” he said.

“AI is being misused, and we need regulation to catch up and create corresponding technology to tackle the new technology. So if we just blindly ban technology, what we have is just police without cars,” Zhang pointed out.

Pietrzykoski, who had been having technical difficulties throughout the panel, was fortunately able to chime in on this point.

“There’s a lot of uncharted ground right now, and I think a lot of people are trying to figure this out. There’s a lot of apprehension and a lot of fear. Either something clarifies what we need to do, or something bad is going to happen, and we’ll have to figure out a way to deal with it,” he said.

So, how can we deal with it before something bad happens? This is where blockchain and education come in.

Klaich talked about the importance of data privacy, ownership, and provenance and explained how we can hash information to the blockchain to trace where the data for the AI models is coming from.
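
The same mechanism can be sketched in a few lines. The snippet below (Python, with hypothetical names and data) hashes a training data set and assembles a small provenance record; only the resulting digest would be anchored in a blockchain transaction, while the data itself stays off-chain with its owner. How the digest actually gets written to a chain depends on the chain and tooling and is left out here.

```python
import hashlib
import json
import time


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


def build_provenance_record(dataset: bytes, source: str, license_id: str) -> dict:
    """Assemble a small provenance record for a training data set.

    Only digests go on-chain; the data itself stays off-chain with its owner.
    """
    return {
        "dataset_sha256": sha256_hex(dataset),
        "source": source,        # e.g. the publisher or data owner
        "license": license_id,   # terms under which the data may be used for training
        "timestamp": int(time.time()),
    }


if __name__ == "__main__":
    # Hypothetical in-memory data stands in for a real training file.
    record = build_provenance_record(b"...training data bytes...", "example-data-coop", "CC-BY-4.0")
    # The digest of the whole record is what a blockchain transaction would anchor.
    print(hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest())
```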

Zhang, who is engaged in research around verifiable AI on the blockchain, said the key is striking a balance between data utilization and data privacy, which is the purpose of regulation, whether it is led by government or industry.

“What I’m trying to tackle for this is another angle, whether we can achieve something that can resolve the conflict between the two entirely so there is no conflict between data privacy and data utilization anymore. Of course this is too good to be true, but the seed is there and this is to utilize blockchain and cryptography,” he said.

The idea is to make the input to the AI model, the execution of the model, and its output verifiable without compromising user privacy. No AI model trade secrets are given away, yet we can be convinced that the model is acting as expected.

Use cases in this scenario include certifying that an AI model was trained on a particular data set, proving it is unbiased. A data owner could also collect royalties from an AI model when their data is used, which likewise requires issuing a certificate.
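
As a rough illustration of the certification idea, the sketch below binds a data set digest and a model-weights digest together in a certificate. All names are hypothetical, and HMAC stands in for a real digital signature; a production system would more likely rely on verifiable computation or zero-knowledge proofs so the claim can be checked without trusting the issuer's key.

```python
import hashlib
import hmac
import json


def digest(data: bytes) -> str:
    """SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


def issue_certificate(dataset: bytes, model_weights: bytes, signing_key: bytes) -> dict:
    """Bind a model to the data set it was trained on.

    The certificate reveals only digests, not the data or the weights,
    so trade secrets stay private while the claim remains checkable.
    """
    claim = {
        "dataset_sha256": digest(dataset),
        "model_sha256": digest(model_weights),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return claim


def verify_certificate(cert: dict, signing_key: bytes) -> bool:
    """Recompute the signature over the digests and compare."""
    payload = json.dumps(
        {k: cert[k] for k in ("dataset_sha256", "model_sha256")}, sort_keys=True
    ).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])
```

In practice the certificate itself, or its hash, could be published on-chain so anyone can later check that the same data set and weights are being referenced.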

The ability to do blockchain-based micropayments also opens up doors for users who are willing to share their personal data.  

“With AI and applications that run AI or are based on AI, we will have the choice of whether we interact with and potentially to be compensated or receive micropayments for releasing certain pieces of our personal information to these models,” Klaich explained. 

The role of education in preventing misuse of AI is an area Eimers is particularly passionate about, and she was delighted to elaborate on the subject. 

“Education really does play a big role and goes far beyond just doing some materials or quick checklists for customers,” she said.

Eimers shared details of Deutsche Telekom’s awareness campaigns, including “A message from Ella,” a deepfake of a child designed to raise parents’ awareness of data privacy for their children.

“We do invest a lot of time and also resources to bring those messages into the mass market and use our reach in the best possible way. I think this is really, really important, and the responsibility and accountability, the thing a corporate has to do,” she said.

The question of who is responsible when AI goes wrong was also addressed, a hotly debated topic because we are still in uncharted territory. No one knows the answer just yet. However, Zhang is confident we cannot hold technology accountable and said shutting down tech is never a solution; remember the police, who also need fast cars.

“Overall, I would say, in terms of accountability, the first technical approach is to have identities,” he said.

“In Web3 or any future generation of the internet, there is only one identity protocol…in this system, all your interactions can be held accountable and auditable. Using blockchain to manage this, you will be able to track and verify the entire history. Using cryptography, we can preserve user privacy and reveal it only if a crime is committed,” he said.

“We hope we can have a system or at least a tool to help identify misbehavior of AI models and, therefore, their corresponding developers or companies to hold them accountable,” Zhang added.
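
To make the auditability idea concrete, here is a minimal, hypothetical sketch of a hash-chained audit log: every interaction record commits to the hash of the previous one, so altering any past entry breaks the chain and is detectable on verification. A blockchain plays this role at network scale, with many independent verifiers and cryptography protecting user privacy.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log where each entry commits to the previous entry's hash.

    A toy stand-in for an on-chain audit trail; it only shows why a
    tampered history becomes detectable.
    """

    def __init__(self):
        self.entries = []

    def append(self, actor_id: str, action: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"actor": actor_id, "action": action,
                "ts": int(time.time()), "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every link; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("actor", "action", "ts", "prev")}
            if entry["prev"] != prev_hash:
                return False
            if entry["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest():
                return False
            prev_hash = entry["hash"]
        return True
```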

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Blockchain & AI unlock possibilities
