Mark Zuckerberg appeared at the Aspen Ideas Festival yesterday. In keeping with the spirit of the event, Zuckerberg brought some ideas. The big ones:
Facebook was right not to remove the doctored video of House Speaker Nancy Pelosi. Zuckerberg said the clip should have been flagged as misleading more quickly, but defended the decision to leave it up. (In principle, I agree with him on this.)
"This is a subject that can be politicized very easily," said Zuckerberg. "People who don't like the way something was cut … will sometimes claim that … it was not the true intention or the wrong information. But we exist in a society … where we value and appreciate free expression. "
But Facebook will treat deepfakes differently from other forms of misinformation. Zuckerberg said the company's policy team is currently weighing the question: "There is a question of whether deepfakes are actually just a completely different category of thing from normal false statements overall, and I think there is a very good case that they are."
Facebook cannot protect against election interference on its own. Zuckerberg was rightly critical of the extremely weak US government response to Russian attacks ahead of the 2016 election, saying:
"One of the mistakes I am concerned about is that the government did not take countermeasures after 2016. The signal sent to the world was that" O.K. We are open to business. "Countries can try to do this and our companies will do their best to try and limit it, but basically there will be no major US government appeal." Since then we have seen increased activity from Iran and other countries, and we are very involved in stepping up the defense. "
On Tuesday, some reports suggested that Zuckerberg had a surprise new 'constitution' in store for Facebook. Instead, on Thursday the company released a report offering a detailed overview of the progress Facebook is making in building an independent review board. The board is connected to Zuckerberg's big ideas – this is the body that could one day make a binding, independent ruling on whether a video like the Pelosi fake can stay on the site.
Since the idea was introduced last year, Facebook has held six workshops around the world, involving more than 650 people from 88 countries. Among other things, the company has run a kind of mock trial, having participants debate what should be done with particular pieces of controversial content, as part of developing a fair process for the board to implement in the future.
The idea is still to build a board of 40 people who will make content decisions in small panels. But all the details are up for debate, and you can read about the endlessly branching debates the company is currently having in the report yourself. It makes for surprisingly compelling reading – not least because it goes out of its way to find and name examples of people calling the board a stupid idea. And it is far livelier than this halting, uncertain conversation between Zuckerberg and two prominent professors, who try to bring a sense of history to the discussion but mostly just heighten the historical strangeness of everything under discussion.
But mostly it's just wild to watch a public company organize a miniature constitutional convention in 2019. The biggest issue is that almost everything is still in play. To wit, from today's Facebook report:
Facebook has proposed that board members serve a fixed term of three years, renewable once. Other suggestions included different term limits; staggered terms; and shorter terms, given the "rapid pace of change" in content and technology. While some felt three years was too long, others thought it was not long enough. The latter believed more time is needed for members to become acquainted with their responsibilities, as well as with the complexities of content governance.
Feedback was similarly divided on the size of the board. Facebook proposed up to 40 members for the first board, which would be global in nature and organized to deliberate and decide cases in panels. Some felt this number was too small and raised concerns about "docket management" and "caseloads." Others, by contrast, found the number too large and unmanageable. Still others, on a more practical level, suggested the board should have 41 members in case a tiebreaker was needed.
It goes on for 38 pages. (The appendices run another 177.)
Many important decisions appear to remain entirely up in the air. For example, I had assumed that one benefit of developing an independent oversight board would be enabling it to set precedents – a kind of case law for future governance questions to draw on. But according to the progress report, participants were divided on the very idea of precedent:
Feedback generally supported some kind of precedent-setting arrangement. Most expressed hope that the oversight board could support "some idea of … continuity, an idea of stare decisis" and "could evaluate multiple fact patterns and carry precedential weight." Responses to the public questionnaire suggested the same: a majority of respondents (66%) said past decisions were "extremely" to "fairly" important, while almost a third (28%) considered past decisions "somewhat important."
Others felt that precedent "should be approached carefully, since … rules must be articulated for overruling panel decisions that later prove inconsistent with changed circumstances." It was further argued: "a strict rule of coherence could create a situation where the first panel to address a particular topic sets a standard that might never be revisited. This would create a sense of arbitrariness and stagnation." Others argued that since social media is a rapidly changing industry, precedent should not bind the review of future, comparable content. In the end, many argued for balance: an understanding of precedent that would help ensure consistency, but would not necessarily be decisive.
The report does not make clear how these questions have been resolved, or whether many of them have been. Facebook says a final charter for the board will be released in August, and that it will work to seat the first group of panelists shortly thereafter.
There are at least two good reasons to support Facebook's governance initiative. One is that it shows the company understands its power over public speech is untenable, and is trying to hand some of that power back to the public. The second is that, by handing back part of that power, Facebook can over time become more accountable to its user base. The details are all messy, and of course they are: it's a pseudo-constitutional convention! But the goal still seems worthy to me, and Facebook is proceeding with a caution that is as welcome as it is rare.
Twitter will now hide harmful tweets from public figures – but not delete them
In an important step, Twitter says it will now place a content warning over certain inflammatory tweets posted by large accounts, Makena Kelly reports:
Today, Twitter is introducing a new notice for tweets from public figures that violate its community guidelines. If a figure like Donald Trump were to tweet something that violated Twitter's rules, the platform could notify users of the violation and reduce the tweet's reach. In recent interviews, Twitter executives had hinted that a change like this was coming soon.
The notice only applies to tweets from accounts belonging to political figures, verified users, or accounts with more than 100,000 followers. If a tweet is flagged for violating platform rules, a team of people from across the company will decide whether it is a 'matter of public interest.' If so, a light gray box noting the violation will appear over the tweet, but the tweet will remain available to users who click through the box. In theory, this could preserve the tweet as part of the public record without allowing it to be promoted to new audiences through Twitter's platform.
Missouri Sen. Josh Hawley may hate Facebook's data practices, but Hamdan Azhar examines how his campaign uses the data the service collects:
Senator Josh Hawley (R-Missouri) told Yahoo Finance that he would not trust Facebook with his money. "I don't trust Facebook with anything," he said.
Just one problem: despite their deep concerns about Facebook, both senators' websites – sherrodbrown.com and joshhawley.com – carry an invisible piece of Facebook technology, called a pixel, that tracks when someone visits their homepage and shares that information with Facebook. Hawley's website even shares when visitors donate, and the exact donation amount. Facebook can then link that information to a person's Facebook account.
Aarti Shahani looks at the Facebook presence of warlord Lieutenant General Mohamed Hamdan Dagalo, who allegedly oversaw the killing of more than 100 people in Sudan:
Lieutenant General Mohamed Hamdan Dagalo, better known as Hemeti, is a social media personality. He is also the leader of the Rapid Support Forces – the paramilitary group that attacked thousands of pro-democracy protesters this month, leaving more than 100 dead. This is something of a second act for Hemeti, who also spent time with the Janjaweed, the militia group believed responsible for the genocide in Darfur some 15 years ago, according to Foreign Policy magazine.
On Facebook, multiple pages promote Hemeti as a formidable yet friendly authority figure.
Emily Birnbaum summarizes a hearing about online extremism this week:
Top technology companies, including Facebook, have claimed that their AI systems already successfully detect vast amounts of terrorist and extremist content. But experts at the hearing said those claims are often exaggerated.
"Context is vital and context can often be difficult for algorithms to detect," said Ben Buchanan, an assistant professor at Georgetown University.
Will Oremus explores the Amazon panopticon now under construction:
Today's Amazon mines huge swaths of the public internet; uses artificial intelligence to crunch data for many of the world's largest companies and institutions, including the CIA; tracks user behavior to build detailed profiles for targeted ads; and sells cloud-connected, AI-powered speakers and screens for the home. It has bought a company that makes networked Wi-Fi routers with access to our private internet traffic. Through its Ring subsidiary, Amazon puts surveillance cameras on the doorbells of millions of people and invites them to share the footage with their neighbors and the police on a crime-focused social network. It sells facial recognition systems to police and private companies.
Tomorrow's Amazon, as outlined in patents, tenders, and marketing materials, could be even more ubiquitous. Imagine Ring doorbell cameras so widespread that you cannot walk down the street without triggering alerts to your neighbors and police. Imagine those cameras with built-in facial recognition, able to work together as a network to identify suspicious people. Imagine Ring surveillance cameras in cars and delivery vehicles, Ring baby monitors in nurseries, and Amazon Echo devices everywhere from schools to hotels to hospitals. Now imagine that all these Alexa speakers and displays can recognize your voice and analyze your speech patterns to tell when you are angry, sick, or considering a purchase. A 2015 patent application, reported by the Telegraph last week, described a system Amazon calls "surveillance as a service," which seems like an apt term for many of the products it already sells.
EU should ban AI-driven citizen scoring and mass surveillance, experts say
Europe is moving toward blocking any future implementation of a social credit system, James Vincent reports:
A group of policy experts convened by the EU recommended banning the use of AI for mass surveillance and mass "scoring of individuals," a practice that may involve collecting varied information about citizens – everything from criminal records to their behavior on social media – and then using it to assess their moral or ethical integrity.
The recommendations are part of the EU's ongoing effort to establish itself as a leader in so-called "ethical AI." Earlier this year it released its first guidelines on the subject, stating that AI in the EU should be deployed in a trustworthy and "human-centric" way.
The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training in schools and universities; and suggesting new methods for monitoring the impact of AI. For now, though, the document is only a set of recommendations, not a blueprint for legislation.
Ellen Cushing obtains audio from a meeting where the co-founder of the home goods retailer seems unaware that the dividing line between business and politics is rapidly crumbling:
His argument is a cousin of one long held by many of his colleagues in the technology industry: that they are not really political entities, but simply value-neutral conduits for whatever service they offer – short-term rentals, rides, community, connection, information, entertainment. That their enormous scale, multiplied by the broad spectrum of their users' beliefs, makes moderation of any kind a Sisyphean and therefore subjective task, and that the only possible solution is to allow every idea, or every customer.
But as my colleague Alexis Madrigal points out, the idea of the impartial platform is dying before our eyes, if it ever really existed: "Some things could not be said. Some kinds of content were preferred by advertisers and the companies. The algorithms they use to sort and promote content have biases." In other words, you simply cannot organize that much information without making judgments.
Anthony Townsend investigates why Google's plan to build a new kind of urban development project in Toronto has led to outrage among locals. It comes down to trust:
Data governance has been a lightning rod because it is new and scary. Early on, Sidewalk put more energy into figuring out how its trash robots would work than into how the data it and others would collect in the proposed neighborhood would be controlled. For a part of Alphabet, you would think this would be a source of unique added value compared with, say, a conventional developer. Not so – the company's original 2017 proposal, hundreds of pages long, tacked on a two-page CYA memo on the subject. It didn't work, and the late attempts to fill the gap only produced more missteps and did little to mollify critics.
More important, questions and criticisms have been raised about how Waterfront Toronto handled Quayside's bidding process and its transparency. Existential questions for Canadian cities about the shifting line between public and private delivery of government services are also on the table. None of these has been satisfactorily addressed by Sidewalk, and the number of elected officials opposing the project has grown as a result.
Elizabeth Dwoskin reports that a content moderator was fired after posting lyrics from "Factory" and "The Promised Land" on an internal forum. Also:
On Thursday, a group of a dozen moderators posted a new letter on Facebook's internal Workplace forum, reviewed by The Washington Post, demanding better pay and a revision of the confidentiality agreements that they say prevent them from seeking clinical help for the traumatic effects of the job, among other things. The moderators work for an Accenture subsidiary in Austin.
Celia Chen visits the internet addiction treatment centers in China:
The center is run by Tao Ran, a former colonel in the People's Liberation Army who once led army psychology units. It is one of the earliest places in China to diagnose and treat internet addiction, and is said to have developed treatment protocols now used in other parts of the country.
The facility consists of several buildings serving as canteens, dorms, and treatment rooms, arranged around an open-air courtyard that doubles as a basketball court, where patients gather for exercises. No electronic devices are allowed.
Samantha Cole writes about a $50 app called DeepNude, which "dispenses with the idea that deepfakes were ever about anything but claiming ownership over women's bodies."
The software, called DeepNude, takes a photo of a clothed person and creates a new, naked image of the same person. It swaps clothes for bare breasts and a vulva, and works only on images of women. When Motherboard tried using an image of a man, it replaced his pants with a vulva. Although DeepNude works with varying degrees of success on images of fully clothed women, it appears to work best on images where the person is already showing a lot of skin. We tested the app on dozens of photos and got the most convincing results on high-resolution images from Sports Illustrated Swimsuit issues.
Adam Mosseri talks to Gayle King about, among other things, a Facebook breakup:
"I think it's important to be really clear if you believe we should be divorced, why and what problem it will solve," he said. "If you look at the issues I am most focused on, such as bullying or self-harm or electoral integrity, all those issues become exponentially more difficult for us on Instagram to address when you split us up."
Farhad Manjoo goes to a Facebook party at Cannes Lions:
There is clearly something strikingly icky about the excess on display. One morning last week, everyone in Cannes woke up to The Verge's investigation into horrific working conditions at a contract facility that hires moderators to review Facebook content. It was a study in contrasts: the moderators complained of bathrooms smeared with feces and menstrual blood. In Cannes, Facebook had bought a stretch of the beach and built a coffee bar, meeting rooms, and a private boat launch to entertain its customers.
It is not true that the internet has eliminated every human job. There are people all along the social media supply chain. Some of them suffer. Others get too upset. The internet has changed everything. It also hasn't changed anything at all.
Twitch launches subscriber-only streams, but only for creators who don't break its rules
Twitch is giving its creators a new perk with which to lure paying subscribers, Julia Alexander reports:
Twitch is giving its top streamers the chance to offer a new, VIP-like feature to their most loyal viewers: streams intended for subscribers only.
The new feature does exactly what the name suggests: any Twitch Affiliate or Partner creator can choose to broadcast exclusively to moderators, VIPs, and subscribers. This comes at no extra cost beyond the minimum $5 monthly fee subscribers already pay to support the streamer. Fans who are not subscribed will be shown a preview of a broadcast before being asked to subscribe to the channel.
After interviewing two of its top executives, Ben Thompson calls Libra 'a bad idea.'
In my view, money – ultimately the medium that makes society work, particularly a capitalist one – carries similarly high stakes. This means the downsides must be weighed more heavily than the upsides, which means less efficiency and more accountability are preferable to the opposite. And that, by extension, means a currency managed, if not by a single company, then at best by a collection of them, is a bad idea.
To be sure, all of these objections apply to a reality that lies far in the future, if it ever arrives at all. But by the time that future dawns, it will be too late to object.
Facebook's Libra probably won't help people without bank accounts
Half of all adults without a bank account live in just seven countries, according to a report cited by Facebook. Elizabeth Lopatto says this could limit Libra's ability to lift people out of poverty:
Facebook is banned in China. Some countries, such as Pakistan, Indonesia, and Bangladesh, have temporarily blocked Facebook for certain periods, potentially limiting the usefulness of money tied to the app. Facebook lists this as a risk factor in its quarterly filing: "Government authorities in other countries may seek to restrict user access to our products if they consider us to be in violation of their laws or a threat to public safety or for other reasons, and certain of our products have been restricted by governments in other countries from time to time."
That is not all: many of these countries have laws governing cryptocurrency. (Yes, I know it is debatable whether Libra qualifies as a cryptocurrency, but Facebook calls Libra a cryptocurrency, so I assume cryptocurrency laws apply.) India's current regulations mean Libra cannot operate in the country. Pakistan is considering regulations for cryptocurrencies, but they are currently prohibited. Cryptocurrency is also implicitly prohibited in Bangladesh and China.
And finally …
Brad Esposito talks to people participating in my favorite current trend in Facebook groups: pretending to be extremely old:
In the group, Snider helps people post to Facebook the kinds of text-heavy images that mourn their "son" in earnest ("My son is dead"); they share gifs of the American flag in faux patriotism, the words "Flood Facebook with our flag!" emblazoned along the top. Often it is just someone replicating the ham-fisted way older generations can often be found using Facebook's basic features, asking an army of commenters what the acronym "wyd" means ("Is this gang language?").
(Yes, it's gang language.)
Talk to me
Today, I invite you to send me tips, comments, questions, and your nominations for Facebook's oversight board: firstname.lastname@example.org.