Fable, a book discovery app, faced backlash after its AI-generated reading summaries produced offensive and biased content, prompting some users to delete their accounts and others to call for the company to abandon the technology.
Books influencer Tiana Trammell's summary, meanwhile, ended with the following advice: “Don’t forget to surface for the occasional white author, okay?” Trammell was flabbergasted, and she soon realized she wasn’t alone after sharing her experience with Fable’s summaries on Threads. “I received multiple messages,” she says, from people whose summaries had inappropriately commented on “disability and sexual orientation.”
Ever since the debut of Spotify Wrapped, annual recap features have become ubiquitous across the internet, providing users a rundown of how many books and news articles they read, songs they listened to, and workouts they completed. Some companies are now using AI to wholly produce or augment how these metrics are presented. Spotify, for example, now offers an AI-generated podcast where robots analyze your listening history and make guesses about your life based on your tastes. Fable hopped on the trend by using OpenAI’s API to generate summaries of its users’ reading habits over the past 12 months, but it didn’t expect the AI model to spit out commentary that took on the mien of an anti-woke pundit.

Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. “We are deeply sorry for the hurt caused by some of our Reader Summaries this week,” the company wrote in the caption. “We will do better.”

Kimberly Marsh Allee, Fable’s head of community, told WIRED before publication that the company was working on a series of changes to improve its AI summaries, including an opt-out option for people who don’t want them and clearer disclosures indicating that they’re AI-generated. “For the time being, we have removed the part of the model that playfully roasts the reader, and instead the model simply summarizes the user’s taste in books,” she said. After publication, Marsh Allee said that Fable had instead decided to immediately remove the AI-generated 2024 reading summaries, as well as two other features that used AI.

For some users, adjusting the AI does not feel like an adequate response. Fantasy and romance writer A.R. Kaufer was aghast when she saw screenshots of some of the summaries on social media. “They need to say they are doing away with the AI completely. And they need to issue a statement, not only about the AI, but with an apology to those affected,” says Kaufer. “This ‘apology’ on Threads comes across as insincere, mentioning the app is ‘playful’ as though it somehow excuses the racist/sexist/ableist quotes.” In response to the incident, Kaufer decided to delete her Fable account.

So did Trammell. “The appropriate course of action would be to disable the feature and conduct rigorous internal testing, incorporating newly implemented safeguards to ensure, to the best of their abilities, that no further platform users are exposed to harm,” she says.

Groves concurs. “If individualized reader summaries aren't sustainable because the team is small, I'd rather be without them than confronted with unchecked AI outputs that might offend with testy language or slurs,” he says. “That's my two cents … assuming Fable is in the mood for a gay, cis Black man's perspective.”

Generative AI tools already have a lengthy track record of race-related misfires. In 2022, researchers found that OpenAI’s image generator Dall-E had a bad habit of showing nonwhite people when asked to depict “prisoners” and all white people when it showed “CEOs.” Last fall, WIRED reported that a variety of AI search engines surfaced debunked and racist theories about how white people are genetically superior to other races. Overcorrecting has sometimes become an issue, too: Google’s Gemini was roundly criticized last year when it repeatedly depicted World War II–era Nazis as people of color in a misguided bid for inclusivity.
“When I saw confirmation that it was generative AI making those summaries, I wasn't surprised,” Groves says. “These algorithms are built by programmers who live in a biased society, so of course the machine learning will carry the biases, too—whether conscious or unconscious.”
Similar News: You can also read news stories similar to this one that we have collected from other news sources.
Fable's AI-Generated Year-End Summaries Spark Backlash for Inappropriate and Combative Tone
Social media app Fable's new AI-powered end-of-year summary feature, intended to be fun and playful, backfired after generating summaries that took on an oddly combative and sometimes inappropriate tone. Some summaries made comments about users' reading habits that focused on their race, gender, and sexual orientation, prompting criticism and apologies from the company.
Fable Book App Faces Backlash Over Offensive AI-Generated Summaries
Fable, a popular book app, has apologized for its AI-generated annual roundups that some users found offensive due to racist, sexist, and ableist remarks. The app attempted to use AI to 'playfully roast' its readers, but the results veered into inappropriate territory. Users shared screenshots of summaries that made offensive comments about their race, gender, and disability. Fable has temporarily removed the 'playful roast' feature and is working on revising its AI model.
Apple to Improve Transparency of AI-Generated Notification Summaries
Apple is releasing a software update to clarify when notification summaries are created by its AI, Apple Intelligence, following reports of inaccurate and misleading summaries.
Apple to Clarify AI-Generated Summaries in iPhone Notifications
Apple will make changes to how iPhones and other devices display Apple Intelligence-summarized notifications to better indicate when AI has altered the original text. This comes after the BBC raised concerns about inaccuracies in summaries, including one that falsely stated the outlet had reported the UHC shooting suspect shot himself and another that incorrectly predicted the winner of the PDC World Darts Championship. Apple says a software update will clarify when summaries are being used.
Apple to Clarify AI-Generated Notification Summaries
Apple is set to implement changes to its 'Apple Intelligence' feature, which summarizes notifications on iPhones and other devices. Following criticism from the BBC regarding inaccurate summarizations, Apple will update its software to better indicate when AI has altered the original text.
Apple to Clarify AI-Generated Notification Summaries After Inaccuracies
Apple is addressing concerns about inaccurate summaries provided by its AI-powered News notification feature. The feature, designed to condense news stories, has displayed misleading information, prompting calls for its disablement. An upcoming software update will introduce clearer visual cues to distinguish AI-generated summaries from original notifications.