AI chatbots are prone to frequent fawning and flattery — and are giving users bad advice because of it: study

Tech News

📰 nypost

Artificial intelligence chatbots feed into humans’ desire for flattery and approval at an alarming rate, leading the bots to give bad — even harmful — advice and making users more self-absorbed, a new study found.

The chatbots overwhelmingly adopt a people-pleasing, “sycophantic” model to keep a captive audience — and, in turn, distort users’ judgment, critical thinking and self-awareness, the study found.

The study probed 11 AI systems, ranging from ChatGPT to China’s DeepSeek, and found that each shows some form of sycophancy — that is to say, they are overly agreeable with their users and affirm their thoughts with little to no pushback.

The 11 chatbots affirmed a user’s actions an average of 49% more often than actual humans did, including in questions involving deception, illegal or socially irresponsible conduct, and other harmful behaviors, the study found.

The fawning tendency — a tool used by the bots to keep users engaged and coming back for more — becomes particularly unhealthy when users go to AI for advice, the study found.

“We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what,” said study author Myra Cheng, a doctoral candidate in computer science at Stanford.

The researchers noted that the sycophantic cycle “creates perverse incentives,” since it continues to “drive engagement” despite being the bots’ most harmful feature. They added that the average user is likely cognizant of the bots’ affirmation, but doesn’t realize that it “is making them more self-centered, more morally dogmatic.”

Users were given advice that could worsen relationships or reinforce harmful behaviors, leading to an erosion of social skills. “People who interacted with this over-affirming AI came away more convinced that they were right, and less willing to repair the relationship. That means they weren’t apologizing, taking steps to improve things, or changing their own behavior,” study co-author Cinoo Lee explained.

The study warns that this same flaw persists across a wide range of users’ interactions with chatbots. Sycophancy is so ingrained into the chatbots that tech companies may have to retrain entire systems to stamp it out, Cheng said. The authors suggested a simpler fix: have AI developers instruct their chatbots to challenge their users more, rather than immediately relenting to their whims.



