Fable, a popular social media app that describes itself as a haven for “bookworms and binge-watchers,” created an AI-powered year-end summary feature recapping the books users read in 2024. It was meant to be playful and fun, but some of the summaries took on a strangely combative tone. Writer Danny Groves’ summary, for example, asked whether he was “ever in the mood for the perspective of a cis straight white man” after labeling him a “diversity devotee.”
Meanwhile, book influencer Tiana Trammell’s summary ended with the following advice: “Don’t forget to look for the occasional white author, okay?”
Trammell was stunned, and after sharing her experience with Fable’s recaps on Threads, she realized she wasn’t alone. “I received multiple messages,” she says, from people whose summaries made inappropriate comments about “disability and sexual orientation.”
Since the debut of Spotify Wrapped, year-in-review features have become ubiquitous across the internet, giving users a rundown of how many books and news articles they’ve read, songs they’ve listened to, and workouts they’ve completed. Some companies now use AI to generate or dress up these recaps. Spotify, for example, offers an AI-generated podcast in which synthetic hosts analyze your listening history and make assumptions about your life based on your tastes. Fable joined the trend by using the OpenAI API to generate summaries of its users’ reading habits over the past 12 months, but it didn’t expect the model to spit out comments in the voice of an anti-woke pundit.
Fable subsequently apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive delivering the mea culpa. “We are deeply sorry for the damage caused by some of our reader summaries this week,” the company wrote in the caption. “We will do better.”
Kimberly Marsh Allee, Fable’s head of community, told WIRED that the company is working on a number of changes to improve its AI summaries, including an opt-out option for people who don’t want them and clearer disclosures indicating which content is AI-generated. “For the moment, we have removed the part of the model that jokes with the reader, and instead the model simply summarizes the user’s taste in books,” she says.
For some users, tweaking the AI doesn’t seem like an appropriate response. Fantasy and romance writer AR Kaufer was horrified when she saw screenshots of some of the recaps on social media. “They need to say they are completely eliminating AI. And they have to issue a statement, not only about AI, but also with an apology to those affected,” says Kaufer. “This ‘apology’ on Threads seems disingenuous, mentioning the app being ‘playful’ as if that somehow excuses the racist/sexist/ableist quotes.” In response to the incident, Kaufer decided to delete her Fable account.
Trammell did the same. “The appropriate course of action would be to disable the feature and conduct rigorous internal testing, incorporating newly implemented safeguards to ensure, to the best of your ability, that no other users of the platform are exposed to harm,” she says.