Reserved IP Address°C
01-22-2025
BSV
$53.04
Vol 41.76m
1.82%
BTC
$106078
Vol 94683.78m
4.37%
BCH
$446.48
Vol 248.46m
4.17%
LTC
$118.83
Vol 1001.45m
0.74%
DOGE
$0.37
Vol 9112.77m
5.92%
Getting your Trinity Audio player ready...

Researchers from Anthropic AI have uncovered traits of sycophancy in popular artificial intelligence (AI) models, demonstrating a tendency to generate answers based on the users’ desires rather than the truth.

According to the study exploring the psychology of large language models (LLMs), both human and machine learning models have been shown to exhibit the trait. The researchers say the problem stems from using reinforcement learning from human feedback (RLHF), a technique deployed in training AI chatbots.

“Specifically, we demonstrate that these AI assistants frequently wrongly admit mistakes when questioned by the user, give predictably biased feedback, and mimic errors made by the user,” read the report. “The consistency of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models are trained.”

Anthropic AI researchers reached their conclusions from a study of five leading LLMs, exploring generated answers from the models to gauge the extent of sycophancy. Per the study, all the LLM produced “convincingly-written sycophantic responses over correct ones a non-negligible fraction of the time.”

For example, the researchers incorrectly prompted chatbots that the sun appears yellow when viewed from space. In reality, the sun appears white in space, but the AI models “hallucinated” an incorrect response.

Even in cases where models generate the correct answers, researchers noted that a disagreement with the response is enough to trigger models to change their responses to reflect sycophancy.

Anthropic’s research did not solve to the problem but suggested developing new training models for LLMs that do not require human feedback. Several leading generative AI models like OpenAI’s ChatGPT or Google’s (NASDAQ: GOOGL) Bard rely on RLHF for their development, casting doubt on the integrity of their responses.

During Bard’s launch in February, the product made a gaffe over the satellite that took the first pictures outside the solar system, wiping off $100 billion from Alphabet Inc’s (NASDAQ: GOOGL) market value.

AI is far from perfect

Apart from Bard’s gaffe, researchers have unearthed a number of errors stemming from the use of generative AI tools. The challenges identified by the researchers include streaks of bias and hallucinations when LLMs perceive nonexistent patterns.

Researchers pointed out that the success rates of ChatGPT in spotting vulnerabilities in Web3 smart contracts plummeted significantly over time. Meanwhile, OpenAI shut down its tool for detecting AI-generated texts over its significantly “low rate of accuracy” in July as it grappled with the concerns of AI superintelligence.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more why Enterprise blockchain will be the backbone of AI.

Watch: AI truly is not generative, it’s synthetic

Recommended for you

Arkansas prohibits BTC miner’s operation near military facility
A military facility, as per the bill, includes a base, a hospital or clinic, or an arsenal; it seems to...
January 21, 2025
BTC miner Bit Digital acquires Montreal site, new client announced
Bit Digital has spent $23 million on the Montreal site, which it will customize to host a 5MW data center...
January 10, 2025
Advertisement
Advertisement
Advertisement