Facebook could pay billions after losing facial recognition privacy appeal

Programming note: Zoe and I are both on assignment this week, and The Interface is off on Thursday while we work on some special reports. The silver lining is that Monday's edition will be extra long!


Yesterday we talked about whether politicians should be able to lie in their Facebook ads. I argued that they should: Facebook ads are public and searchable, and if a politician or political party is telling lies, that seems like an important and useful thing for a democracy to know. Facebook is enormous and its CEO is not accountable to any electorate, so I would prefer that the company not serve as the referee of political speech.

Many readers see things differently, though, so I wanted to share a few of your responses.

The most common response I received was a kind of have-your-cake-and-eat-it-too argument, in which citizens get to insist that (1) Facebook be broken up, but (2) it referee political speech until that happens. Here's one reader's take:

Regarding Facebook, you said: "To worry about Facebook's enormous size and influence – and I do! – while also demanding that it referee political speech seems like a strange contradiction."

I don't think it's a contradiction at all. I believe that regardless of a company's size, efforts should be made to remove, or at least label, false information, including and perhaps especially in political advertisements. In my opinion, those two positions can easily coexist.

I think that is essentially correct, although it doesn't really address my larger concern, which is the gigantic, unaccountable company acting as the referee of what politicians can say.

Another common answer was that it all seems a bit too convenient – Facebook gets to wash its hands of fact-checking some of the hardest questions it will ever face, and keep all the profits? Here's another reader:

While having the company publish ad content and then have it examined by external fact-checkers may make the process more democratic and fairer than if Facebook or the state decided alone, it also means there is always a possibility that false advertising will have serious consequences because it was broadcast on the platform, even if it is debunked later.

In that sense, Facebook is effectively reaping the benefits of such a lax advertising policy (attracting a wide range of customers, and the money from publishing their ads) while also avoiding the responsibilities and costs involved in actually making proactive decisions.

Another reader put it more concisely:

If it is too difficult to ensure that political ads are not full of lies, they should not accept political ads. Sort of like a supermarket that won't sell food it can't be sure won't give you food poisoning.


I think this criticism is basically fair, and it's worth remembering that Facebook once considered banning political advertising as not worth the trouble. (It generates less than 5 percent of the company's revenue, according to Reuters.)

But none of this really gets at my larger frustration here, which is that people seem determined to hold Facebook responsible for politicians' lies, when we could instead be holding the politicians accountable. I understand the fear that we live in a post-truth world where people simply believe whatever their party's Facebook ads tell them to believe, but that also strikes me as defeatist and more than a little patronizing.

As it happens, Mark Zuckerberg discussed how the company moderates political speech in the leaked audio obtained by The Verge. In this section, which has not been published before, an employee asks whether Facebook should model its content policy strictly on the First Amendment. (A senator recently suggested making that the law of the land.) Zuckerberg says no – that most people want the company to go well beyond the First Amendment. In the rest of his answer, Zuckerberg describes the difficulty of deciding what counts as misinformation on a topic like immigration into Europe, and suggests that he accepts he will face criticism here no matter what he does.

He is talking about moderation generally, not about Facebook's decision to avoid making these calls on political ads. But his thinking here adds some color to why he would make that decision:

Mark Zuckerberg: In general, I don't really think people want us to be managing content. There are approximately 20 categories of harmful content that we target. They are all different – everything from terrorist propaganda to bullying to incitement to violence, graphic content to pornography. … 18 of the 20 categories are not that controversial. There is some controversy, and each sits right on the edge of how you set the policy. But by and large, (they) are not the things people are focused on.

There are two categories that are politically sensitive, and they are hate speech and misinformation.

And the issue we run into with hate speech is … many people think we should be more aggressive in moderating content that is offensive or that would in fact make certain groups of people unsafe. And then there are other groups on the other side of these debates who feel that they are engaging in legitimate political discourse.


It is always difficult to talk about this in the context of your own political environment. So I find it a little easier to take the pressure off and think about some of the European debates on migration, and some of the challenges of integrating the large number of people who have come to these different countries fleeing Syria and other places. In the debate that is going on, some things become too generalized and feel hateful, while some people on the other side (say), "Well, I'm trying to discuss the real issues around … trying to absorb a lot of people into a society at once." We have to be able to have these debates, so where is the line?

That is very difficult, and we are right in the middle of it. I don't think anyone is saying that we shouldn't do this – that we should just follow the (First) Amendment. But it is a really difficult balance.

The other one, misinformation, I find really difficult. Because on the one hand, I think everyone agrees in principle that you don't want the content that gets the most distribution to be blatant hoaxes that are deceiving people. But the other side of the debate is that many people express their lives and experiences by telling stories, and sometimes the stories are true and sometimes they are not. And people use satire and they use fiction … and the question is, how do you draw the boundary around satire or a fictional story? Where's the line?

It's not that it's 100 percent impossible, but there are real nuances in doing this. Many people feel that, in a world where others get to arbitrate what counts as misinformation and do the fact-checking, their ability to express something they feel deeply and that corresponds with their lived experience gets impeded. So you want to do both, right? You want to make sure that you give people a voice to express their lived experience in a civil way, and you want to make sure that the things that go viral aren't … flagrant, blatant hoaxes that are going to be harmful.

So those two are by far the most charged. But in general … no one has come to us to say, "Please allow terrorist propaganda on your service." Even the people who are submitting bills for debate in Congress say they want more openness on the platform. So I don't think that's where this is going. I just think the reality is that we are stuck in this nuanced space, and that it will keep coming at us from many different directions as we try to navigate it as well as we can.

The Ratio


Today in news that could affect public perception of the big tech platforms.

Trending up: Google will now require Android device manufacturers to include its digital wellbeing features, including parental controls and screen time monitoring.

Trending down: Like Facebook before it, Twitter was caught using phone numbers provided for two-factor authentication to target ads at people.

Trending down: Google contractors in London are threatening to strike over unpaid bonuses, job cuts, and poor working conditions.

Governing

The protests in Hong Kong continue to have ripple effects around the world, as companies with business interests in China struggle to walk the line between allowing free expression for employees and customers and not angering the Chinese government.


Today Marco Rubio called on lawmakers to investigate ByteDance's TikTok, citing evidence that the Chinese company is censoring content in America. Tony Romm and Drew Harwell of The Washington Post have the story:

In a series of tweets, Rubio added that he has asked the Trump administration to "fully enforce anti-boycott laws" that prohibit any person or "US subsidiaries of Chinese companies" from "complying with foreign boycotts" that seek to force US companies to conform to the views of #China.

Rubio's tweets reflect a wave of criticism directed at American and Chinese technology companies for suppressing content that supports pro-democracy protesters in Hong Kong. TikTok has received a good deal of this scrutiny owing to its popularity and its opaque content moderation policies:

TikTok's lack of content about the Hong Kong protests, which Chinese leaders have sought to undermine, has raised fears that the platform censors ideas the government wants to suppress. In response, TikTok's Beijing-based parent company told The Washington Post last month that the app's US platform is not influenced by the Chinese government, and that the lack of protest footage could reflect app users' view of it as a place for entertainment, not politics. It declined to share additional information about its content moderation practices.

Apple, on the other hand, is taking heat from Chinese state media for allowing an app that tracks the Hong Kong police into the App Store. After initially blocking the app, HKmap.live, Apple allowed it into the App Store last week. It uses crowdsourcing to show protesters the location of law enforcement. Apple, which depends on China for revenue and manufacturing more than any other tech giant, is doing the right thing here, and it could cost the company. (Verna Yu / The Guardian)

Meanwhile, Apple banned the Quartz news app from the Chinese App Store. Quartz has been covering the protests in Hong Kong closely.

Activision Blizzard suspended a Hearthstone player who expressed support for demonstrators in Hong Kong. The move came after Ng Wai Chung, known as Blitzchung, dressed in a gas mask and goggles and used a pro-democracy protest slogan during a post-game interview. He is now barred from competing for a year. Some employees walked out of their offices on Wednesday in protest. Elsewhere, Fortnite maker Epic Games used the moment to reassure players that it would not ban them for political speech. (Gregor Stuart Hunter and Zheping Huang / Bloomberg)


Mark Zuckerberg will testify before the House Financial Services Committee about Libra on October 23rd. It will be the first time a Facebook executive has testified before Congress since David Marcus spoke to lawmakers about the company's planned cryptocurrency in July. (Akela Lacy / The Intercept)

The news comes just as lawmakers put pressure on Visa, Mastercard, and Stripe to reconsider their involvement with the Libra Association. In a letter to the companies' CEOs, Sens. Brian Schatz (D-HI) and Sherrod Brown (D-OH) warned of the project's many risks, including facilitating criminal and terrorist financing and destabilizing the global financial system. (Russell Brandom / The Verge)

To top it all off, the Libra Association's product lead, Simon Morris, quietly left the group in August for undisclosed reasons. I'm guessing the reason was not "it's going really well and I just have nothing left to do here." (Alex Heath / The Information)

Joe Biden asked Facebook to reject Trump campaign ads containing misleading claims about his family's business dealings in Ukraine. Facebook said no. (Lauren Feiner / CNBC)

A Senate Intelligence Committee report on Russia's interference in the 2016 election called on tech companies such as Google and YouTube to help curb the spread of misinformation. Previous reports focused primarily on Twitter and Facebook. (Georgia Wells, Robert McMillan and Dustin Volz / The Wall Street Journal)

The Foreign Intelligence Surveillance Court ruled that an FBI program aimed at foreign suspects violated the rights of US citizens by collecting their personal data alongside data on foreign targets. The program ran from 2017 to 2018 and included the collection of email addresses and telephone numbers. (Zachary Evans / National Review)

Industry

⭐ An anti-Semitic shooting in Germany was streamed live on Twitch. The incident is likely to prolong the pressure on tech companies to catch these crimes as they occur and to do more to remove re-uploads from their servers. Makena Kelly:

Today's attack mirrored the March mass shooting of Muslims in Christchurch, New Zealand, which was streamed on Facebook Live. Today's video, which lasts about 35 minutes, shows a man shooting two people and trying, unsuccessfully, to break into the synagogue. He also gives a short speech to the camera, railing against Jews and denying that the Holocaust happened. Two people were found dead in today's attack, and German law enforcement has raised the possibility that multiple attackers were involved. Only one perpetrator appears in the video.

It is unclear how many people watched the initial stream or how many copies may have been archived from Twitch – which is owned by Amazon – or from other sites. Extremism researcher Megan Squire reported that the video was also distributed via the encrypted platform Telegram, with clips viewed by approximately 15,600 accounts. The Christchurch shooting was watched live by only a few people, but was re-uploaded about 1.5 million times after the attack – so dealing with the aftermath will be a major concern.

Americans have a patchy understanding of digital security, according to new research from Pew. Only 28 percent can identify an example of two-factor authentication, one of the most important ways to protect online accounts. And almost half were not sure what private browsing is. (Emily A. Vogels and Monica Anderson / Pew)

Instagram turned Throwback Thursday into an official feature. It's called "On This Day" and lets users share any photo they have posted on the same calendar date in the past. The launch is part of the app's new "Create" mode, which lets users play with interactive stickers, drawings, and text without first having to take a photo. (Josh Constine / TechCrunch)

YouTube launched a new tool that allows politicians to book advertising space months in advance. The tool could be valuable for candidates who want to take advantage of YouTube's ad targeting options before voting begins in February in Iowa and New Hampshire. (Emily Glazer and Patience Haggin / The Wall Street Journal)


YouTube narrowly passed Netflix as the No. 1 video streaming platform among teenagers, according to a survey by investment firm Piper Jaffray. Netflix still beat Hulu and Amazon by a comfortable margin. (Annie Palmer / CNBC)

Microsoft's Airband initiative, launched in 2017 to improve internet access in rural areas of the US, is now expanding to Latin America and Sub-Saharan Africa. The goal is to connect 40 million more people to the internet by July 2022. (Jon Porter / The Verge)

And finally…

Coleen Rooney accused someone with access to Rebekah Vardy's Instagram account of selling fake stories about her to the gossip magazines, and it is so dramatic

Generally I try to stay out of conflicts between the wives and girlfriends (WAGs!) of British football players, even when one of them may be secretly leaking stories about another to the gossip magazines. But then Coleen Rooney revealed her devilishly clever method of exposing her betrayer: she spent five months posting fake stories to her Instagram account and restricted the audience for those stories to a single person, fellow WAG Rebekah Vardy.

Normally I would excerpt this story, but my favorite element of the drama isn't captured in the piece: Coleen is now being celebrated on Twitter with her own hashtag, #WagathaChristie.

Talk to us


Send us tips, comments, questions, and more misleading political ads: casey@theverge.com and zoe@theverge.com.