What Instagram really learned from hiding like counts

In April 2019, amid growing questions about social networking’s effects on mental health, Instagram announced that it would test hiding likes. The person who posted an image would still see how many people had sent it a heart, but the total number of hearts would be invisible to the public.

“It’s about young people,” Instagram chief Adam Mosseri said that November, just before the test arrived in the United States. “The idea is to try to depressurize Instagram, make it less of a competition, give people more space to focus on connecting with the people they love and the things that inspire them. But it is really focused on young people.”

After more than two years of testing, Instagram today announced what it found: removing likes doesn’t seem to meaningfully depressurize Instagram, for young people or anyone else, and so like counts will remain publicly visible by default. But all users will now have the option to hide them if they wish, either across their entire feed or on a post-by-post basis.

“What we heard from people and experts was that not seeing like counts was beneficial for some and annoying to others, particularly because people use like counts to get a sense of what’s trending or popular, so we’re giving you the choice,” the company said in a blog post.

At first glance, the move feels like a remarkable anticlimax. The company invested more than two years in testing these changes, and Mosseri told Wired that he spent “a lot of time on this personally” when the company started the project. For a moment, it seemed as if Instagram were on the brink of a fundamental transformation, away from an influencer-powered social media reality show and toward something more intimate and human.

By 2019, this no-public-metrics, friends-first approach had been all but perfected by Instagram’s forever rival, Snapchat. And the idea of removing likes, views, followers, and other popularity scoreboards was gaining traction in some quarters: artist Ben Grosser’s Demetricator project implemented the idea in a series of browser extensions, to positive reviews.

So what happened on Instagram?

“It turned out that it didn’t actually change that much about … how people felt, or how much they used the experience, the way we thought it might,” Mosseri said in a briefing with reporters this week. “But it did end up being pretty polarizing. Some people really liked it, and some people really didn’t.”

On that last point, he added, “You can check out some of my @mentions on Twitter.”

While Instagram ran its tests, a growing body of research found only limited evidence linking smartphone or social network use to changes in mental health, The New York Times reported last year. Just this month, a 30-year study of teens and technology at the University of Oxford reached a similar conclusion.

Note that this is not to say that social networks are necessarily good for teenagers or anyone else, just that they don’t move the needle much on mental health. Assuming that’s true, it makes sense that changes to the user interface of individual apps would have a limited effect as well.

At the same time, I wouldn’t consider this experiment a failure. Rather, I think it teaches a lesson that social networks have often been too reluctant to learn: rigid, one-size-fits-all platform policies make people miserable.

Consider, for example, the vocal minority of Instagram users who want to view their feeds chronologically. Or the Facebook users who want to pay to disable ads. Or look at all the impossible questions related to speech that are decided at the platform level when they could be better resolved on a personal level.

Last month, Intel was roasted online after showing off Bleep, an experimental AI tool for censoring voice chat in multiplayer online video games. If you’ve ever played an online shooter, chances are you’ve rarely made it through a full afternoon without being exposed to a barrage of racist, misogynist, and homophobic speech. (Usually from a 12-year-old.) Rather than censor everything, Intel said it would put the choice in users’ hands. Here is Ana Diaz at Polygon:

The screenshot shows the user settings for the software, with a sliding scale on which people can choose between “none, some, most, or all” of categories of hate speech such as “racism and xenophobia” or “misogyny.” There is also a toggle for the N-word.

An “all racism” toggle is understandably disconcerting, even if hearing racism is currently the default in most in-game chat, and the screenshot generated plenty of memes and jokes. Intel explained that it built the settings this way to account for the fact that people accept language from friends that they would not accept from strangers.

But the basic idea of speech-sensitivity sliders is a good one, I suspect. Some issues, particularly those related to non-sexual nudity, vary so widely across cultures that it seems ridiculous to impose a single global standard on them, as is the norm today. Letting users build their own experience, from whether their like counts are visible to whether pictures of breastfeeding appear in their feed, feels like the obvious solution.

There are some obvious limits here. Tech platforms can’t ask users to make an unlimited number of decisions; that would bring too much complexity into the product. Businesses will still have to draw hard lines around tough issues, including hate speech and misinformation. And introducing choices doesn’t change the fact that, as with all software, most people will simply stick with the default settings.

That said, extensive user choice is clearly in the interest of both people and platforms. People get software that better suits their cultures and preferences. And platforms get to offload a series of impossible-to-solve riddles from their policy teams onto an enthusiastic user base.

There are signs beyond today’s announcement that this future is coming. Reddit offered an early look with its policy of establishing a hard “floor” of rules for the entire platform while letting individual subreddits raise the “ceiling” by adding rules of their own. And Twitter CEO Jack Dorsey has predicted a world in which users can choose from a variety of feed-ranking algorithms.

With his decision on likes, Mosseri is moving in the same direction.

“It ended up being that the clearest path forward was something we already believe in, which is giving people choice,” he said this week. “I think it’s something we need to do more of.”


This column was published in partnership with Platformer, a daily newsletter about Big Tech and democracy.