The Senate's secret algorithms bill does not actually fight secret algorithms

Politicians sometimes exaggerate the laws they're proposing. But when they start making up new sections of a bill from whole cloth, something has gone wrong. In the case of the Filter Bubble Transparency Act, it's not just spin; it's an example of how badly defined buzzwords can make it impossible to address the internet's problems.


The Filter Bubble Transparency Act (FBTA) is sponsored by some of the Senate's most prominent tech industry critics, including Sens. Mark Warner (D-VA) and John Thune (R-SD). Introduced last week, the bill is named after Eli Pariser's 2011 book The Filter Bubble, which argues that companies like Facebook create digital echo chambers by optimizing content for what each person already engages with.

The FBTA aims to let people opt out of those echo chambers. Large companies would have to notify users if they're delivering content – like search results or a news feed – based on personal information that the user did not explicitly provide. That could include a user's search history, their location, or information about their device. Sites would also have to let users turn off this personalization, although the rules do not apply to user-supplied data such as search terms, saved preferences, or an explicitly entered geographical location.

The rules are supposed to offer users more options and make them more aware of how web platforms work. A spokesperson for Warner offered one example: if you look for "pizza delivery" on Google Search, you'll normally get results for nearby businesses based on your location data, a kind of personalization that the bill refers to as an "opaque algorithm." Under the proposed rules, Google would need to provide a generic version that didn't rely on that data, which the bill calls an "input-transparent algorithm."

Limiting personalization sounds like a straightforward goal, but the FBTA's sponsors have made it surprisingly hard to understand, starting with the term "opaque algorithm," which sort of makes sense in context. An algorithm (a word that broadly refers to flowchart-style rule sets) is considered opaque if it uses a certain kind of data that some people don't realize they're providing. It's considered transparent if it doesn't.
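To make the distinction concrete, here's a minimal sketch of the two categories as the bill seems to define them, using the pizza example above. Everything here is invented for illustration – the bill specifies no code, and names like `rank_opaque` are mine:

```python
import math
from dataclasses import dataclass

@dataclass
class Business:
    name: str
    rating: float  # a generic, non-personal quality signal
    lat: float
    lon: float

def rank_input_transparent(results: list[Business]) -> list[Business]:
    # Uses only generic signals: every user who types "pizza delivery"
    # sees the same ordering.
    return sorted(results, key=lambda b: -b.rating)

def rank_opaque(results: list[Business],
                inferred_lat: float, inferred_lon: float) -> list[Business]:
    # Also uses a location the user never explicitly entered -- the kind
    # of inferred input that makes a ranking "opaque" under the bill.
    def distance(b: Business) -> float:
        return math.hypot(b.lat - inferred_lat, b.lon - inferred_lon)
    return sorted(results, key=distance)
```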

On a larger scale, though, these terms are so misleading that even the bill's sponsors can't keep things straight. The FBTA does not make platforms explain exactly how their algorithms work. It does not prevent them from using arcane and manipulative rules, as long as those rules aren't built around certain kinds of personal data. And removing or disclosing a few factors in an algorithm doesn't make the overall algorithm transparent. This bill is not aimed at systems like the "black box" algorithms used in criminal sentencing, for example, where transparency is a key issue.

Despite this, a press release repeatedly claims the bill fights "secret algorithms" rather than micro-targeting or invasive data mining. Here's a supposed summary of the FBTA's rules:

Clearly notify (big web platform) users that their platform creates a filter bubble that uses secret algorithms (computer-generated filters) to determine the order or manner in which information is delivered to users; and

Provide (big web platform) users with the option of a filter bubble-free view of the information they provide. The bill would enable users to transition between a customized, bubble-generated version of information and a non-filter bubble version (for example, the “sparkle icon” option that is currently offered by Twitter that allows users to toggle between a personalized timeline and a purely chronological timeline).


If you've read the bill, this is baffling. For one thing, virtually all big recommendation and search systems are "secret algorithms" on some level, and the bill doesn't ask companies to disclose their code or rule sets. For another, Twitter's "sparkle icon" doesn't just remove personalization; it removes algorithmic sorting in general. Sen. Marsha Blackburn (R-TN), another sponsor, explicitly claims this is part of the FBTA:

"When individuals log onto a website, they are not expecting the platform to have chosen for them what information is most important," said Senator Blackburn. “Algorithms directly influence what content users see first, in turn shaping their worldview. This legislation would give consumers the choice to decide whether they want to use the algorithm or view content in the order it was posted. ”

That's just not true. Sen. Thune did float the idea of an "algorithm-free" Facebook and Twitter this summer. But this bill never mentions viewing content "in the order it was posted" – a fact I confirmed with Warner's office. (Blackburn's office did not return a request for clarification.)
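The gap between the two ideas is easy to show in code. A chronological feed involves no ranking beyond the timestamp; an "input-transparent" feed under the FBTA can still be ranked however the platform likes, as long as the signals aren't personal. This is a hypothetical sketch with invented field names, not anything from the bill or Twitter's actual systems:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: float   # seconds since epoch
    global_likes: int  # engagement from all users -- not personal data

def chronological(feed: list[Post]) -> list[Post]:
    # What the "sparkle icon" toggle gives you: newest first, full stop.
    return sorted(feed, key=lambda p: -p.timestamp)

def input_transparent_ranked(feed: list[Post]) -> list[Post]:
    # What the FBTA actually permits in the opt-out view: no personal
    # data, but still algorithmically ranked -- here, by raw engagement.
    # This is not "the order it was posted."
    return sorted(feed, key=lambda p: -p.global_likes)
```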

This confusion has been carried over into press coverage of the bill. The Wall Street Journal says the FBTA would "require big online search engines and platforms to disclose that they are using algorithms to sort the information that users are requesting or are being encouraged to view." Again, nothing in this bill requires companies to disclose the use of algorithms. They just have to disclose when those algorithms use personal information for customized results. And that makes sense, because algorithms are a basic building block of web services. Search engines couldn't exist without them.

The FBTA's sponsors are using "algorithm" interchangeably to mean "sorting program," "bad, manipulative social media recommendation tool," and "social media personalization system."

It's not clear whether the lawmakers are intentionally exaggerating or simply got it wrong. The press release claims the bill will let consumers "control their own online experiences instead of being manipulated by Big Tech's algorithms and analytics." Co-sponsor Jerry Moran (R-KS) says it would make companies "offer certain products and services to consumers free of manipulation."

But there's lots of room for manipulation without hyper-personalized search or feed results. Even without targeting, nothing stops companies from delivering inflammatory content that encourages negative engagement, one of the biggest criticisms of Facebook and YouTube. The bill also allows personalization based on users' friends lists, video channel subscriptions, or other knowingly provided preferences, which would allow for a pretty significant echo chamber. As for "analytics," the bill doesn't say anything about whether or not companies are allowed to mine personal data for purposes like secret consumer scores.
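A sketch makes the loophole visible. Both signals below are fair game under the bill: the subscription list is user-supplied, and a global engagement-bait score isn't personal data at all. (The field names and the `outrage_score` signal are hypothetical, invented for illustration only.)

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    channel: str
    outrage_score: float  # hypothetical engagement-bait signal, not personal data

def fbta_compliant_feed(videos: list[Video],
                        subscriptions: set[str]) -> list[Video]:
    # Filtering on subscriptions is allowed even in the opt-out view,
    # because the user "knowingly provided" that list -- so the echo
    # chamber survives.
    subscribed = [v for v in videos if v.channel in subscriptions]
    # Ranking by inflammatory engagement uses no personal data, so
    # nothing in the bill touches it either.
    return sorted(subscribed, key=lambda v: -v.outrage_score)
```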

The proposal still raises interesting questions. If an "input-transparent" sorting system cannot incorporate users' search histories, would it require platforms like YouTube to turn off "watch next" recommendations, since your viewing history might include the video you're already watching? Would Uber have to disclose if it charges higher fares when your phone battery is low? Companies use personalization in bizarre ways, and a bill requiring them to disclose those methods could be fascinating.


But those issues are hard to discuss when they're cloaked in the blanket shorthand of "algorithms." If Congress wants to help people understand the web better, members could start by actually explaining what they're doing instead of scoring rhetorical points with buzzwords.