As enterprises rush to integrate generative artificial intelligence (AI) into their operations to enhance efficiency and cut costs, consumers are increasingly losing trust that these companies will use the technology ethically, a new survey shows.

The sixth edition of Salesforce’s State of the Connected Customer report surveyed more than 14,000 customers across over two dozen countries on AI and its application in the enterprise world. It reveals that while consumers believe the technology could enhance their experience, it has also deepened their distrust of the companies deploying it.

“As brands increasingly adopt AI to increase efficiency and meet increasing customer expectations, nearly three-quarters of their customers are concerned about unethical use of the technology,” the company found.

The survey is the latest to point to trust as the biggest challenge facing the emerging technology. A February survey of 17,000 people found that 48% don’t trust AI with their work, and only 12% trusted AI to be responsible for at least 75% of managerial decisions.

Another survey in June found that over three-quarters of respondents don’t trust AI with decisions that directly affect them.

Blockchain technology is the solution to the trust gap in AI. With immutability and traceability as core tenets of the technology, AI misuse can be traced back to an individual. Authorities can also actively monitor developments to ensure compliance, especially now that even the people who played a key role in AI’s development are warning about its risks.
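To make that idea concrete, below is a minimal sketch in Python of an append-only, hash-chained audit log for AI outputs. It is not tied to any particular blockchain or product (the AuditLog and AuditEntry names are purely illustrative assumptions), but it demonstrates the property the argument relies on: once an entry linking an output to an actor is recorded, any later alteration breaks the chain and is detectable.

```python
# Illustrative sketch only: a hash-chained audit log for AI outputs.
# Names (AuditLog, AuditEntry, record, verify) are hypothetical, not a real SDK.
import hashlib
import json
import time
from dataclasses import dataclass


@dataclass
class AuditEntry:
    """One record tying an AI output to the actor and model that produced it."""
    actor_id: str        # which user or service invoked the model
    model_id: str        # which model version generated the output
    output_digest: str   # SHA-256 of the generated content (content stays off-chain)
    timestamp: float
    prev_hash: str       # hash of the previous entry, forming the chain
    entry_hash: str = ""

    def compute_hash(self) -> str:
        payload = json.dumps(
            {
                "actor_id": self.actor_id,
                "model_id": self.model_id,
                "output_digest": self.output_digest,
                "timestamp": self.timestamp,
                "prev_hash": self.prev_hash,
            },
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()


class AuditLog:
    """Append-only log: tampering with any past entry breaks the hash chain."""

    def __init__(self) -> None:
        self.entries: list[AuditEntry] = []

    def record(self, actor_id: str, model_id: str, output_text: str) -> AuditEntry:
        output_digest = hashlib.sha256(output_text.encode()).hexdigest()
        prev_hash = self.entries[-1].entry_hash if self.entries else "0" * 64
        entry = AuditEntry(actor_id, model_id, output_digest, time.time(), prev_hash)
        entry.entry_hash = entry.compute_hash()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered after the fact."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry.prev_hash != prev_hash or entry.compute_hash() != entry.entry_hash:
                return False
            prev_hash = entry.entry_hash
        return True


if __name__ == "__main__":
    log = AuditLog()
    log.record("agent-42", "example-model-v1", "Generated marketing copy ...")
    log.record("agent-42", "example-model-v1", "Generated customer reply ...")
    print("chain intact:", log.verify())              # True
    log.entries[0].actor_id = "someone-else"          # simulate tampering
    print("chain intact after edit:", log.verify())   # False
```

In a real deployment, the entry hashes would be anchored on a public blockchain or timestamped by an independent party, so the operator of the log could not quietly rebuild the chain after altering a record.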

Back to the survey, Salesforce found that consumers are becoming less open to AI integration. This year, only 51% of consumers want the technology used to improve their experiences, a dip from 65% last year.

The one thing consumers agree on is that companies need to be more trustworthy in their use of AI. Nearly 90% want companies to disclose when they are communicating with AI, while 80% say humans should validate AI output.

“Ethical AI is a pressing concern for our customers and for our customers’ customers. Getting it right means creating AI with trust at the center of everything you do. That means gathering data with transparency and consent, training algorithms on diverse data sets, and never storing customer information insecurely,” commented Kathy Baxter, Salesforce’s head of responsible AI.

Watch: AI Summit PH 2023: Philippines is ripe to start using artificial intelligence
