Twitter is beginning to analyze the harmful effects of its algorithms


Twitter is launching a new initiative, Responsible Machine Learning, to assess any “accidental damage” caused by its algorithms. A team of engineers, researchers, and data scientists from across the company will study how Twitter’s use of machine learning can lead to algorithmic biases that negatively impact users.

One of the first tasks is an assessment of racial and gender bias in Twitter’s image cropping algorithm. Twitter users have pointed out that the auto-cropped photo previews appear to favor white faces over Black faces. Last month, the company began testing the display of full images instead of cropped previews.

The team will also examine how timeline recommendations differ across racial subgroups and analyze content recommendations for different political ideologies in various countries. Twitter says it will “work closely” with outside academic researchers, share the results of its analyses, and solicit public feedback.

It is not clear how much impact the findings will have. Twitter says they “don’t always translate into visible product changes” but may simply lead to “increased awareness and important discussions” about how the company uses machine learning.

Twitter’s decision to analyze its own algorithms for bias follows other social networks such as Facebook, which formed similar teams in 2020. There is also ongoing pressure from lawmakers to keep companies’ algorithmic bias in check.

Twitter is also in the early stages of exploring “algorithmic choice,” which may give people more input into what content is served to them. CEO Jack Dorsey said in February that he envisions an “app store-like representation of ranking algorithms” from which people can choose the algorithms that manage their feeds.