As enterprises rush to integrate generative artificial intelligence (AI) into their operations to enhance efficiency and cut costs, consumers are increasingly losing trust that these companies will use the technology ethically, a new survey shows.

The sixth edition of Salesforce’s State of the Connected Customer report surveyed over 14,000 customers in over two dozen countries on AI and its application in the enterprise world. It reveals that while consumers believe the technology could enhance their experience, it has also deepened their distrust of the companies deploying it.

“As brands increasingly adopt AI to increase efficiency and meet increasing customer expectations, nearly three-quarters of their customers are concerned about unethical use of the technology,” the company found.

The survey is the latest to point to trust as the biggest challenge facing the emerging technology. A February survey of 17,000 respondents found that 48% don’t trust AI with their work, and only 12% trusted AI to be responsible for at least 75% of managerial decisions.

Another survey in June found that over three-quarters of respondents don’t trust AI with decisions that directly affect them.

Blockchain technology is the solution to the trust gap in AI. With immutability and traceability as core tenets of the technology, AI misuse can be traced back to an individual. Authorities can also actively monitor developments to ensure compliance, especially now that even the people who played a key role in AI’s development are warning about its risks.
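As a rough illustration of that traceability argument, the sketch below (in Python, purely hypothetical and not tied to any particular blockchain or vendor API) hashes each AI interaction together with the operator and model that produced it and chains the records, so that tampering with any earlier entry becomes detectable during an audit.

```python
import hashlib
import json
import time

def record_ai_output(chain, operator_id, model_id, prompt, output):
    """Append a tamper-evident record of an AI interaction to an audit chain.

    Each record embeds the hash of the previous record, so altering any
    earlier entry invalidates every hash that follows it -- the property
    a public blockchain provides, shown here in miniature.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "timestamp": time.time(),
        "operator_id": operator_id,   # who deployed the model
        "model_id": model_id,         # which model produced the output
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_hash": hashlib.sha256(output.encode()).hexdigest(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

# Usage: an auditor can recompute every hash in order and trace a
# disputed output back to the operator and model that produced it.
audit_chain = []
record_ai_output(audit_chain, "acme-support-bot", "example-model",
                 "What is the refund policy?", "Refunds within 30 days.")
```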

Back to the survey, Salesforce found that consumers are becoming less open to AI integration. This year, only 51% of consumers want the technology used to improve their experiences, a dip from 65% last year.

The one thing consumers agree on is that companies need to be more trustworthy in their use of AI. Nearly 90% want companies to disclose when they are communicating with AI, while 80% say humans should validate AI output.

“Ethical AI is a pressing concern for our customers and for our customers’ customers. Getting it right means creating AI with trust at the center of everything you do. That means gathering data with transparency and consent, training algorithms on diverse data sets, and never storing customer information insecurely,” commented Kathy Baxter, Salesforce’s head of responsible AI.
