Researchers from Anthropic have uncovered sycophantic traits in popular artificial intelligence (AI) models, which show a tendency to generate answers matching users’ desires rather than the truth.
According to the study, which explores the psychology of large language models (LLMs), both humans and AI models have been shown to exhibit the trait. The researchers say the problem stems from reinforcement learning from human feedback (RLHF), a technique widely used in training AI chatbots.
“Specifically, we demonstrate that these AI assistants frequently wrongly admit mistakes when questioned by the user, give predictably biased feedback, and mimic errors made by the user,” read the report. “The consistency of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models are trained.”
Anthropic researchers reached their conclusions from a study of five leading LLMs, examining the models’ generated answers to gauge the extent of sycophancy. Per the study, all five LLMs produced “convincingly-written sycophantic responses over correct ones a non-negligible fraction of the time.”
For example, the researchers prompted chatbots with the false claim that the sun appears yellow when viewed from space. In reality, the sun appears white from space, but the AI models went along with the incorrect premise and produced a wrong answer.
Even in cases where the models generated correct answers, the researchers noted that a user simply disagreeing with the response was enough to make the models change their answers, reflecting sycophancy.
Anthropic’s research did not solve the problem but suggested developing new training methods for LLMs that do not rely on human feedback. Several leading generative AI models, like OpenAI’s ChatGPT and Google’s (NASDAQ: GOOGL) Bard, rely on RLHF for their development, casting doubt on the integrity of their responses.
During Bard’s launch in February, the product made a gaffe over which telescope took the first pictures of a planet outside the solar system, wiping $100 billion off Alphabet Inc’s (NASDAQ: GOOGL) market value.
AI is far from perfect
Apart from Bard’s gaffe, researchers have unearthed a number of errors stemming from the use of generative AI tools. The challenges they identified include bias and hallucinations, in which LLMs perceive patterns that do not exist.
Researchers have also pointed out that ChatGPT’s success rate in spotting vulnerabilities in Web3 smart contracts plummeted over time. Meanwhile, OpenAI shut down its tool for detecting AI-generated text in July over its “low rate of accuracy,” even as the company grappled with concerns around AI superintelligence.