Congress is lagging far behind on algorithmic misinformation


One of the scariest concerns about social media is the idea of algorithmic misinformation – that the recommendation systems built into platforms like Facebook and YouTube can quietly elevate the most harmful and disruptive content on the network. But during a Senate hearing on Tuesday entitled “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds,” lawmakers didn’t seem any closer to finding a solution to the real-world violence that platforms like Facebook have internally acknowledged they cause.

At the top of the hearing, Senator Chris Coons (D-DE) said “there is nothing inherently wrong” with how the companies testifying – Facebook, Twitter, and YouTube – use algorithms to keep users engaged on their platforms. He said the committee was not weighing any specific legislation and that the hearing was intended as a listening session between lawmakers and the companies.

It’s a stark contrast to more focused issues like Facebook’s acquisition practices or Apple’s App Store fees, which have seen immediate action from courts and regulators, often spurred by fact-finding from Congress. But the algorithm problem is thornier and more difficult to tackle – and increasingly, legislators seem reluctant to try.

Congress’ slow walk was especially notable in comparison to the expert panelists, who presented algorithmic misinformation as an existential threat to our system of government. “The biggest problem facing our nation is misinformation at scale,” Joan Donovan, research director at Harvard’s Shorenstein Center on Media, Politics and Public Policy, said Tuesday. “The cost of inaction is the end of democracy.”

Donovan expressed her frustration with the hearing on Twitter, saying lawmakers should have pushed the platforms harder on the specific mechanisms used to rank content. “The companies should have answered questions about how they determine what content to distribute and what criteria are used to moderate,” Donovan said. “We could also have explored the role that political advertising and source hacking play in our democracy, or the curatorial need for information integrity models.”

Still, the general impression was that the federal government’s approach to algorithms has not developed much since 2016 and is unlikely to change anytime soon. “We’re still having the conversations we had four years ago about the spread of misinformation,” said Tristan Harris, co-founder and president of the Center for Humane Technology.


Congress first began to hear these arguments about how algorithms amplify extremist content following the 2016 presidential election and the rise of the Republican Party’s Trump wing. In recent years, lawmakers have held hearings. They have sent letters. But in contrast to the spirited exchanges and deluge of bills we’ve seen around antitrust and content moderation, they have avoided regulating algorithms, generally choosing to politely nudge these companies in the right direction.

It’s something Coons, who chaired Tuesday’s hearing, acknowledged in an interview with Politico last month. “Congress rarely regulates the internet and social media, and when it does, whatever it puts into law is often stalled or in place for years,” Coons said. He continued: “And in an area where technology is evolving quite quickly, sometimes oversight hearings, letters [and] conversations with the leaders of major social media companies can lead to those companies changing their practices arguably faster than we could manage through legislation.”

That theme was clear at Tuesday’s hearing, and it remains unclear exactly how the committee might proceed with regulating social media algorithms.

The closest Congress has come in recent weeks is a bill introduced in the House.

In March, Representatives Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) introduced the Protect Americans from Dangerous Algorithms Act. The bill would amend Section 230 of the Communications Decency Act to strip a platform’s liability immunity if its algorithm amplifies content associated with a civil rights violation. Critics claim it does more harm than good. While the bill has received some partisan support in the House following the Capitol riot, the Senate has not taken it up or offered its own solution.


For now, the likely outcome is more talk. “I am encouraged to see that these are issues of broad interest, and I think there could be a broadly bipartisan solution,” Coons said at Tuesday’s hearing. “But I’m also conscious that we don’t want to unnecessarily constrain some of the most innovative, fastest-growing companies in the West. More discussion is needed to find that balance.”