About a year ago, after Mark Zuckerberg floated the idea on a podcast with Ezra Klein, I argued that Facebook needed some sort of Supreme Court. "A perfect content moderation regime is probably too much to hope for," I wrote at the time. "But Facebook can build and support institutions that help it strike a balance between competing notions of free speech and a secure community. Ultimately, the question of what belongs on Facebook cannot be determined solely by the people who work there."
In the months that followed, Facebook's vision for the Supreme Court – which it decided to give the less glamorous name of an independent oversight board – quickly came into focus. Today, the company offered the most detail we have had about the plan so far. In a series of blog posts, it released a draft charter for the organization, summarized some of the key decisions that went into it, and explained the reasoning behind its design. Finally, Mark Zuckerberg published a letter in which he reiterated the need for some kind of court for Facebook:
We are responsible for enforcing our policies every day, and we make millions of content decisions every week. But ultimately, I don't believe private companies like ours should be making so many important decisions about speech on our own. That is why I have called on governments to set clearer standards for harmful content. And it is why we are now giving people a way to appeal our content decisions by establishing the independent Oversight Board.
Facebook's independent oversight board is a subject I find very exciting, but I understand if the idea makes your eyes glaze over. Viewed from a great enough distance, it can seem rather arcane. The board's primary authority is to decide which Facebook posts stay up and which come down, and you can imagine all manner of petty disputes that the board will be asked to weigh in on.
But we also now know that Facebook and its moderators currently patrol the boundaries of speech for a huge portion of the internet. And for those who believe the company has made the wrong decision about a post, there has historically been very little recourse. You can type into a small text box and pray, but you are unlikely to ever receive much more than an automated message. The system may work in most cases, but it has never felt particularly just – that is, open and accountable.
As described in today's materials, the board is designed to create a sense of justice that has never existed before. The board will have meaningful independence from Facebook, and although its decisions are not legally binding, Facebook has pledged to follow its recommendations. Its decisions will be public and will serve as precedents – meaning that a kind of case law will develop over time. And the board will be able to go beyond individual decisions to advise on policy, to which Facebook will commit to responding publicly.
"I have no idea whether the board will attain legitimacy. Maybe it will disappear overnight, like Google's AI ethics board," said Kate Klonick, a law professor who has been studying the Facebook plan in recent months, in a Twitter thread. "But at the very least, so far, it's a greater and more rigorous dedication of time, money, and platform power than anything that has come before."
Why would Facebook do this to itself? I think Zuckerberg is sincere when he says he doesn't want to make important speech decisions himself. There is very little upside in it – if you run a platform used by the entire world, any high-profile speech decision you make may alienate millions of people. Better to entrust those decisions to a board, and to give it just enough independence that you can credibly say you had nothing to do with the ruling.
On the other hand, we live in a time of declining trust in institutions. Many people are disinclined to trust Facebook for a variety of reasons, and it is not clear how an entity as strange as the Facebook oversight board can gain legitimacy in the eyes of the public. And even if it does, the highly political nature of many board decisions will make it a lightning rod for controversy. It is hard to imagine Facebook escaping without some collateral damage.
That said, the design of the board is thoughtful, even smart. Board members with domain expertise, making decisions in public, could lend Facebook's content moderation operation a legitimacy it has never had before. And even if it falls short of Facebook's highest ideals, this board charter looks, on its face, much better than the system we live under today.
Today in news that could affect public perception of the big tech platforms:
Trending up: Snap will reportedly pay publishers for news content, bringing high-quality journalism to an app whose news offerings have largely been thin and shallow.
Trending down: The fall guy for some of Facebook's policy missteps of the past two years has never actually taken the fall.
⭐ The Facebook page "Vets for Trump" was taken over by a North Macedonian businessman, and its owners couldn't get it back for months. After taking over the page, the businessman solicited donations from its 100,000 followers. (Craig Timberg / The Washington Post)
Foreign actors – some seeking profit, some seeking influence, and some seeking both – have not let up in their efforts to reach American voters through online information sources such as Facebook, Twitter, and YouTube. Veterans and active-duty service members are especially valuable targets for manipulation because they vote at high rates and can influence others who admire their service.
"Veterans as a cohort are more likely than others to participate in democracy. That includes not just voting, but running for office and getting others to vote," said Kristofer Goldsmith, lead investigator for Vietnam Veterans of America. He first discovered the takeover of Vets for Trump during research for a report, released Wednesday, that documents widespread, ongoing efforts by foreign actors to scam and manipulate veterans through Facebook and other social media.
Facebook updated its policy on dangerous individuals and organizations following the massacre in Christchurch, New Zealand. The company will now target content from hate groups with the same AI techniques it has used against ISIS and Al-Qaeda. Facebook has also expanded its definition of "terrorist organization" to include groups that attempt acts of violence against civilians, such as white supremacists. (Facebook)
Facebook removed 244 accounts and 269 Pages for coordinated inauthentic behavior originating in Iraq and Ukraine. The people behind the scheme used fake accounts to amplify content and manage Pages. In Iraq, they posted mostly about religion, Saddam Hussein, and American military action; in Ukraine, about celebrities and sports. (Nathaniel Gleicher / Facebook)
Moderating Facebook content remains deeply traumatic for some contractors, months after the company committed to improving working conditions. This story includes interviews with current and former moderators in Berlin. (Alex Hern / The Guardian)
Facebook executive Elliot Schrage stepped down as policy chief after the Cambridge Analytica scandal – but he never left the company. Schrage has stayed on as vice president of special projects, where he is currently working on Libra. (Kurt Wagner / Bloomberg)
President Trump returned to the Bay Area for the first time since his election for a fundraiser in Palo Alto. Only 5 percent of donations from tech workers to presidential candidates have gone to Trump since 2017. (Rebecca Ballhaus and Chad Day / The Wall Street Journal)
Russia carried out a "stunning" breach of FBI communications, which contributed to the expulsion of Russian diplomats from the United States in 2016. Among other things, the Russians sought to prevent the bureau from detecting their spies. (Zach Dorfman, Jenna McLaughlin and Sean D. Naylor / Yahoo News)
Surveillance in the UK, already far more extensive than in most Western democracies, is expanding further thanks to facial recognition software installed in some of the country's many public surveillance cameras. In May, San Francisco went the other way, banning the technology outright. (Adam Satariano / The New York Times)
⭐ Facebook is working with Luxottica, the parent company of Ray-Ban, to develop augmented-reality glasses. The company hopes to have something to sell customers by 2023, reports Salvador Rodriguez of CNBC:
The glasses would allow users to take calls, show information to users in a small display, and livestream their vantage point to their friends and followers on social media.
Facebook is also developing an artificial-intelligence voice assistant that would serve as the user input for the glasses, CNBC reported earlier. In addition, the company has experimented with a ring device that lets users input information via a motion sensor. That device is codenamed Agios.
The company has hundreds of employees at its Redmond offices working on AR glasses technology, but so far Facebook has struggled to shrink the device down to a form factor consumers will find attractive, a person who worked on the device told CNBC.
Snapchat rolled out its 3D camera mode, which adds a new dimension to photos. Users with an iPhone X or newer can apply 3D effects, lenses, and filters to their snaps; the mode replicates an upcoming feature in the next iteration of Spectacles, which will be available soon. (Ashley Carman / The Verge)
Snapchat is also exploring a new bet on news, recruiting publishers for a dedicated news tab in the app. My dream of high-quality news publishers being paid what are essentially carriage fees by the platforms is quickly coming into view. (Facebook is doing something similar.) (Alex Heath and Jessica Toonkel / The Information)
In a beautiful essay, Tavi Gevinson interrogates her own fame, which began with a fashion blog at age 12 and a magazine at 15, and the conflicted relationship with Instagram she developed along the way. Make time for this one. (Tavi Gevinson / The Cut)
We reviewed Apple's new phones and concluded that the iPhone 11 is the phone most people should buy (if they're planning to upgrade). Personally, I've got my eye on the green iPhone 11 Pro. (Nilay Patel / The Verge)
Repo men scan and upload the locations of every car they pass to the Digital Recognition Network, a surveillance database of 9 billion license plate scans that is accessible to private investigators. Although the network is not run by the government, law enforcement has access to it. (Joseph Cox / Vice)
Scientists predict that sea levels could rise 4 feet or more by 2100, flooding tech headquarters across Silicon Valley. But Google and Apple are among those still investing heavily in real estate in the area. (Marketplace)
Indiana University's Observatory on Social Media introduced a tool that claims to detect, in real time, fake accounts being used to manipulate public opinion. It's called BotSlayer, and I'm curious how well it works – detecting bots is notoriously difficult. (Indiana University)
And finally …
Eve Peyser talks to people who modify their bodies in extreme ways for Instagram followers:
Earlier this year, Louise – who ultimately wants to become a reality TV star – made a bombastic appearance on Dr. Phil, playing a cartoonish version of herself, proclaiming herself a "skinny legend," and declaring: "I'd rather die hot than live ugly." She has gained more than 70,000 followers since the TV appearance and now claims to earn around $3,000 a month promoting products such as Flat Tummy Tea.
"I could say that social media puts a lot of pressure on me, but I'm grateful for it. I wonder, if Instagram didn't exist, what would I look like now?" she mused. And then there's the money. "When I had 200 followers, I was still in the app all day watching people," she explained. So why not capitalize on that?
I can think of a few reasons!