Fable's AI Book Summaries Sparked Outrage for Racist and Sexist Remarks

Technology News

Tags: AI, Fable, Bias
  • 📰 Source: WIRED
  • ⏱ Reading time: ~3 min. here, 12 min. at publisher

Fable, a reading app, faced backlash after its AI-powered book summary feature generated offensive and biased content. The summaries, which used OpenAI's API, made inappropriate remarks about users' reading preferences related to race, gender, and disability. Fable has since apologized and removed the AI-generated summaries, but the incident highlights the potential dangers of unchecked AI bias.

Books influencer Tiana Trammell's summary, meanwhile, ended with the following advice: "Don't forget to surface for the occasional white author, okay?" Trammell was flabbergasted, and she soon realized she wasn't alone after sharing her experience with Fable's summaries on Threads. "I received multiple messages," she says, from people whose summaries had inappropriately commented on "disability and sexual orientation."

Ever since the debut of Spotify Wrapped, annual recap features have become ubiquitous across the internet, providing users a rundown of how many books and news articles they read, songs they listened to, and workouts they completed. Some companies are now using AI to wholly produce or augment how these metrics are presented. Spotify, for example, now offers an AI-generated podcast in which robots analyze your listening history and make guesses about your life based on your tastes. Fable hopped on the trend by using OpenAI's API to generate summaries of its users' past 12 months of reading habits, but it didn't expect the AI model to spit out commentary that took on the mien of an anti-woke pundit.

Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. "We are deeply sorry for the hurt caused by some of our Reader Summaries this week," the company wrote in the caption. "We will do better."

Kimberly Marsh Allee, Fable's head of community, told WIRED before publication that the company was working on a series of changes to improve its AI summaries, including an opt-out option for people who don't want them and clearer disclosures indicating that they're AI-generated. "For the time being, we have removed the part of the model that playfully roasts the reader, and instead the model simply summarizes the user's taste in books," she said. After publication, Marsh Allee said that Fable had instead decided to immediately remove the AI-generated 2024 reading summaries, as well as two other features that used AI.

For some users, adjusting the AI does not feel like an adequate response. Fantasy and romance writer A.R. Kaufer was aghast when she saw screenshots of some of the summaries on social media. "They need to say they are doing away with the AI completely. And they need to issue a statement, not only about the AI, but with an apology to those affected," says Kaufer. "This 'apology' on Threads comes across as insincere, mentioning the app is 'playful' as though it somehow excuses the racist/sexist/ableist quotes." In response to the incident, Kaufer decided to delete her Fable account.

So did Trammell. "The appropriate course of action would be to disable the feature and conduct rigorous internal testing, incorporating newly implemented safeguards to ensure, to the best of their abilities, that no further platform users are exposed to harm," she says. Groves concurs. "If individualized reader summaries aren't sustainable because the team is small, I'd rather be without them than confronted with unchecked AI outputs that might offend with testy language or slurs," he says. "That's my two cents … assuming Fable is in the mood for a gay, cis Black man's perspective."

Generative AI tools already have a lengthy track record of race-related misfires. In 2022, researchers found that OpenAI's image generator Dall-E had a bad habit of showing nonwhite people when asked to depict "prisoners" and all white people when it showed "CEOs." Last fall, WIRED reported that a variety of AI search engines surfaced debunked and racist theories about how white people are genetically superior to other races. Overcorrecting has sometimes become an issue, too: Google's Gemini was roundly criticized last year when it repeatedly depicted World War II–era Nazis as people of color in a misguided bid for inclusivity.

"When I saw confirmation that it was generative AI making those summaries, I wasn't surprised," Groves says. "These algorithms are built by programmers who live in a biased society, so of course the machine learning will carry the biases, too—whether conscious or unconscious."



AI, Fable, Bias, Discrimination, OpenAI, Book Summaries, Generative AI


Similar News: You can also read news stories similar to this one that we have collected from other news sources.

Fable Book App Faces Backlash Over Offensive AI-Generated Summaries
Fable, a popular book app, has apologized for its AI-generated annual roundups that some users found offensive due to racist, sexist, and ableist remarks. The app attempted to use AI to "playfully roast" its readers, but the results veered into inappropriate territory. Users shared screenshots of summaries that made offensive comments about their race, gender, and disability. Fable has temporarily removed the "playful roast" feature and is working on revising its AI model.
Read more »

Fable's AI-Generated Year-End Summaries Spark Backlash for Inappropriate and Combative Tone
Social media app Fable's new AI-powered end-of-year summary feature, intended to be fun and playful, backfired after generating summaries that took on an oddly combative and sometimes inappropriate tone. Some summaries made comments about users' reading habits that focused on their race, gender, and sexual orientation, prompting criticism and apologies from the company.
Read more »

Fable's AI Reading Summaries Spark Outrage Over Biased Commentary
Fable, a book discovery app, faced backlash after its AI-generated reading summaries produced offensive and biased content, prompting users to delete their accounts and calls for the company to abandon the technology.
Read more »

Apple to Clarify AI-Generated Notification Summaries After Inaccuracies
Apple is addressing concerns about inaccurate summaries provided by its AI-powered News notification feature. The feature, designed to condense news stories, has displayed misleading information, prompting calls for its disablement. An upcoming software update will introduce clearer visual cues to distinguish AI-generated summaries from original notifications.
Read more »

Apple to Address Issues with Notification Summaries in Upcoming iOS Update
Apple acknowledges concerns regarding inaccuracies in its notification summaries feature and promises a software update to improve clarity.
Read more »

Apple addresses concerns over Apple Intelligence notification summaries
Izzy, a tech enthusiast and a key part of the PhoneArena team, specializes in delivering the latest mobile tech news and finding the best tech deals. Her interests extend to cybersecurity, phone design innovations, and camera capabilities.
Read more »


