A new study reveals that people are more likely to buy products based on AI-generated summaries of reviews, even though the AI frequently hallucinates. The research highlights the impact of AI on consumer behavior and its limitations in accurately conveying information.
Researchers studying artificial intelligence (AI) have uncovered a surprising finding: individuals are more inclined to make a purchase after reading an AI-generated summary of online reviews than after reading a human-written one.
Strikingly, this preference persists even though the AI hallucinated 60% of the time when queried about the products. A team from the University of California, San Diego (UCSD) says this is the first study to demonstrate the real-world consequences of cognitive biases introduced by large language models (LLMs) on user behavior, and to quantitatively measure the impact of AI influence on people.

The study examined AI's ability to summarize and fact-check information through two main tasks. In the first, the models were prompted to summarize product reviews and media interviews, then asked to fact-check new descriptions to verify their accuracy. In the second, the models were given news-story descriptions alongside falsified versions of the same descriptions and asked to perform the same fact-checking. The results exposed a critical limitation: the models could not reliably distinguish fact from fabrication.

The analysis of online product reviews produced a striking finding: participants were significantly more likely to express interest in buying a product after reading an AI-generated summary. The researchers proposed two potential explanations. The first is that LLMs focus more on the beginning of the input text, a tendency related to the 'lost in the middle' phenomenon. The second, highlighted by lead author Alessa, is that LLMs become less reliable when processing information not included in their training data.

During testing, the team found that the chatbots altered the sentiment of real user reviews in 26.5% of cases and hallucinated 60% of the time when users asked questions about the reviews. The project used six LLMs, 1,000 electronics reviews, 1,000 media interviews, and a news database of 8,500 items. Bias was measured by quantifying framing shifts in content sentiment, overreliance on text at the beginning of samples, and the presence of hallucinations.
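To make the "framing shift" metric concrete, here is a minimal sketch of how a sentiment flip between a review and its summary might be flagged. The tiny word lexicon and the sign-flip rule are assumptions for illustration only, not the UCSD team's actual instrumentation.

```python
# Sketch of a framing-shift metric: flag cases where a summary's sentiment
# polarity flips relative to the original review. The lexicon below is a
# hypothetical stand-in for a real sentiment model.

POSITIVE = {"great", "excellent", "love", "reliable", "fast"}
NEGATIVE = {"broken", "terrible", "slow", "refund", "disappointed"}

def polarity(text: str) -> int:
    """Return +1, -1, or 0 from a crude word-count lexicon."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def framing_shift_rate(pairs) -> float:
    """Fraction of (review, summary) pairs whose polarity flips sign."""
    flips = sum(
        polarity(review) * polarity(summary) < 0  # opposite nonzero polarities
        for review, summary in pairs
    )
    return flips / len(pairs)

pairs = [
    ("Great phone, fast and reliable.", "Reviewers love this fast phone."),
    ("Terrible battery, asked for a refund.", "A great, reliable battery."),
]
print(framing_shift_rate(pairs))  # → 0.5: one of the two summaries flips sentiment
```

In the study's terms, the 26.5% sentiment-alteration figure would correspond to this rate computed over the full corpus of real reviews and model summaries, presumably with a far more robust sentiment scorer.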
The researchers selected product reviews with either very positive or very negative conclusions, and 70 subjects were assigned to read either the original reviews of common consumer products or the chatbot-generated summaries of those reviews. Those who read the original reviews said they would buy the product in 52% of cases, while those who read the AI-generated summaries said they would make a purchase 84% of the time. This contrast underscores the impact of AI on consumer behavior, potentially driving sales despite the inaccuracies in AI-generated content.
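A quick back-of-the-envelope check suggests the 52% vs 84% gap is unlikely to be chance. The article does not report how the 70 subjects were split between conditions, so the even 35/35 split below is an assumption for illustration; a standard two-proportion z-test then looks like this:

```python
from math import sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Pooled two-proportion z statistic for the difference p2 - p1."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # combined success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    return (p2 - p1) / se

# Assumed even split of the 70 subjects: 35 read originals, 35 read summaries.
z = two_proportion_z(0.52, 35, 0.84, 35)
print(round(z, 2))  # → 2.87, beyond the 1.96 cutoff for significance at p < 0.05
```

Even under this conservative assumption, the gap clears the conventional significance threshold, which is consistent with the study's framing of the effect as substantial.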
