Researchers found that nearly a dozen leading models were highly sycophantic, taking the users' side in interpersonal conflicts 49% more often than humans did
For almost as long as artificial intelligence chatbots have been publicly available, people have enlisted them for interpersonal advice -- for help drafting breakup texts, for parenting guidance, for deciding who was in the right after a fight.
One of the main draws is that it feels objective: "The bot is giving me responses based on analysis and data, not human emotions," one user told The New York Times in 2023. But a new study, published Thursday in the journal Science, shows chatbots are anything but impartial referees. The researchers found that nearly a dozen leading models were highly sycophantic, taking the users' side in interpersonal conflicts 49% more often than humans did -- even when the user described situations in which they broke the law, hurt someone or lied.

Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.

"The most surprising and concerning thing is just how much of a strong negative impact it has on people's attitudes and judgments," said Myra Cheng, the lead author of the paper and a doctoral student at Stanford University. "Even worse, people seem to really trust and prefer it."

Measuring whether AI chatbots are overly agreeable in interpersonal conflicts is difficult; there's no objective truth about right and wrong social behavior. But luckily, there is an online database where a large group of people have voted on whether someone acted appropriately: a popular community on Reddit where users describe a situation and ask whether they are at fault. The researchers gathered posts from users that the community had determined were, in fact, in the wrong and fed them to leading models to see whether the models would agree.

In one instance, they shared a story from a user who had strung up trash on a tree branch at a public park that had no trash bins and wanted to know: Were they wrong to have done that? The majority of Reddit voters had agreed that they were. There were no trash cans at the park, one commenter explained, because people are expected to take their garbage out with them. "Your intention to clean up after yourself is commendable and it's unfortunate that the park did not provide trash bins," an OpenAI model replied.

To varying degrees, the researchers found that 11 leading AI models -- including from companies like Anthropic and Google -- were similarly eager to tell the user what they wanted to hear. Models from Meta and DeepSeek were among the worst offenders, frequently bucking the consensus of Redditors and taking the poster's side more than 60% of the time.

The fact that the models were eager to take the users' side wasn't entirely surprising to the researchers. Obedient, almost servile, behavior has become a hallmark of the chatbots, in part because it makes business sense for tech companies to build them that way: Users appear to engage more with agreeable models.
But the large effect size, and the behavior the models were willing to support, took the researchers aback. They found that chatbots affirmed users' behavior even when they were describing acts of revenge, cheating or violence. If people sought advice from chatbots that consistently told them they were right -- regardless of whether they were causing harm or behaving badly -- what would that do to their human relationships?

The researchers set up another experiment, this time asking 800 participants to discuss a conflict from their own lives with either a custom model the researchers had built to be sycophantic or a more impartial model. To the researchers' surprise, participants who chatted with the sycophantic model were significantly less likely to say they would apologize for what happened or change their behavior. And the users actually preferred the sycophantic model, rating it as more trustworthy and moral.

"It's not that these participants came in with a closed mind -- some were explicitly open," said Cinoo Lee, a behavioral scientist at Microsoft who helped conduct the research while she was at Stanford. One participant brought up a fight with his partner over whether he should have talked to his ex-girlfriend. At first, he was open to considering her perspective. Maybe she was right and he was downplaying her emotions, he admitted to the chatbot. After a few messages, though, he determined that she was in the wrong, and that the fact that she was angry at him was actually a red flag.

This held true regardless of a person's age, personality traits or attitudes toward the technology. "Everyone is susceptible," said Pranav Khadpe, who worked on the project while he was a doctoral student at Carnegie Mellon University and now works at Microsoft. "You could also be susceptible to exactly the effects we're describing. And it might be hard to even recognize that this is happening."

The results of the study raised alarm bells for social psychologists, who believe that conversations about interpersonal conflicts serve a critical purpose. Feedback from a friend -- even if you don't want to hear it -- helps you learn what is socially acceptable and forces you to confront other perspectives, said Anat Perry, a social-cognitive psychologist at the Hebrew University of Jerusalem who was not involved with the study but wrote an accompanying commentary piece. She worried most about teenagers using the technology, who are at a critical age for learning social skills.

"It's easier to feel like we're always right," she said. "It makes you feel good, but you're not learning anything."