On Friday, the Washington Post reported that a video doctored to make House Speaker Nancy Pelosi appear to slur her words had collected millions of views and shares on social networks, where engagement guides distribution. In reality, the (still unknown) creator of the video had slowed the footage to 75 percent of its original speed, while adjusting the pitch of Pelosi's voice to make it sound more natural. The result was catnip for conservative partisans eager to paint the congresswoman as a drunk.
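The mechanics are simple enough to sketch. The snippet below is a toy illustration only (not the actual tool anyone used, and all numbers are illustrative): it shows that naively stretching a recording to 75 percent speed lowers its pitch by the same factor, which is exactly why the video's creator also had to adjust the pitch to keep the voice sounding natural.

```python
import math

SR = 8000  # sample rate, Hz

def sine(freq, seconds, sr=SR):
    # Generate a pure tone to stand in for a voice recording.
    n = int(seconds * sr)
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def slow_down(samples, rate):
    # Naive time-stretch by linear interpolation. Played back at the original
    # sample rate, the result lasts 1/rate as long -- and its pitch drops by
    # the same factor, because every waveform cycle is stretched too.
    out = []
    for j in range(int(len(samples) / rate)):
        pos = j * rate
        i = int(pos)
        frac = pos - i
        a = samples[i]
        b = samples[min(i + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)
    return out

def dominant_freq(samples, sr=SR):
    # Rough pitch estimate: upward zero crossings per second.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings * sr / len(samples)

clip = sine(440, 1.0)           # one second of a 440 Hz tone
slowed = slow_down(clip, 0.75)  # "75 percent speed"
# dominant_freq(slowed) comes out near 330 Hz, i.e. 0.75 * 440
```

Real editing tools avoid this side effect with pitch-preserving time-stretching; the Pelosi video's maker instead slowed the clip and then corrected the pitch by hand.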
The rapid spread of the video across the internet led to fresh fears that our politics was about to be radically and irreversibly changed by the introduction of digitally altered propaganda. Over the weekend, the episode generated an extraordinary amount of commentary – about what it suggests about our future, and about what social networks should do about it.
Facebook turned to its default disinformation playbook: it labeled the video as false and showed an opaque pseudo-warning to anyone trying to share it, informing the user that "additional reporting is available." Monika Bickert, who leads policy at Facebook, went on Anderson Cooper 360 to defend this approach.
Cooper asked Bickert why Facebook kept the video up. As Ian Bogost recounts in The Atlantic:
This way of thinking seemed to confuse Cooper, and rightly so. Why would an immediate harm, such as incitement to violence in an acute conflict, be wrong, but a delayed harm, such as damaging the reputation of the woman who is third in line for the presidency, okay?
Once the content exists, Bickert implied, the company supports it as a tool to generate more content. "The conversation on Facebook, on Twitter, also offline, is about the video being manipulated," Bickert responded, "as evidenced by my appearance today. This is the conversation." The purpose of content is not to be true or false, wrong or right, virtuous or wicked, ugly or beautiful. No, the purpose of content is to exist, and in so doing to inspire "conversation" – that is, ever more content.
Meanwhile, Axios said the video ushered in "our sad, new, distorted reality." Charlie Warzel said Facebook had become a perfect machine for capturing our attention. Kara Swisher said the incident shows "how expert Facebook has become at blurring the lines between simple mistakes and deliberate deception, thereby abrogating its responsibility as the largest news distributor on the planet." Joshua Topolsky encouraged people to delete Facebook until it is prepared to make editorial judgments.
On the other hand, commentators worried about a world in which platforms make editorial decisions without recourse for those whose speech is ruled out of bounds. "A lot of the commentary on the Pelosi video is 'not even wrong,' because no consistent or realistic enforcement standard is proposed other than 'remove things I don't like,'" said Alex Stamos.
While all this was playing out, Bay Area TV station KTVU reported the story of Kate Kretz, an artist who sews Make America Great Again hats into hate-speech symbols, such as a Ku Klux Klan hood or a Nazi armband. Kretz's work is intended as a protest against the Trump administration's racist policies, but earlier this month Facebook removed her work for violating its community standards against hate speech:
In early May, Facebook removed Kretz's images of her latest work for violating community standards. The artist protested and reposted her images, this time with a disclaimer stating that her art was not hate speech but rather a commentary on hate speech, much like a political cartoon.
Then Facebook disabled her account.
In both the Pelosi and Kretz cases, we see people altering artifacts of political speech in an effort to influence our politics. Both are protected under the First Amendment. Whether they are protected under Facebook's community standards is more debatable. The spirit of Facebook's rules would seem to prohibit a distorted propaganda video and to permit photos of fairly literal political art. But in practice, Facebook did the opposite.
The reason is that Facebook, for all the consequences it has for politics, is determined to stay above the fray. (Or perhaps just adjacent to the fray, where people can more easily post about it on Facebook.) The company does not distinguish between a propaganda video and a work of art because it does not seriously want to. Understanding the difference would mean taking on new, expensive responsibilities and opening itself up to new political attacks at a time when it faces major new regulatory threats worldwide.
Among Facebook executives, this posture of strained neutrality is the only one imaginable, regardless of the bricks it may catch in the press as a result. A policy that allows the maximum amount of political speech, with a small number of exceptions described in a publicly posted document, has a logical coherence that "removing things I don't like" does not.
That is one reason Facebook's critics might consider drafting an alternative set of Facebook community standards for public consideration. I have no doubt there are better places to draw the lines here – removing malicious propaganda quickly while promoting what is clearly art. But someone has to draw those lines and defend them.
Alternatively, we could break Facebook up into its component parts and allow the resulting Baby Books to experiment with their own standards. Perhaps WhatsApp, stripped of its viral forwarding mechanisms, would find a slowed-down Pelosi video acceptable when shared from one friend to another. Meanwhile, Instagram could quickly detect the video's rising popularity and ensure that nothing like it appeared on the app's Explore page, where the company might otherwise unwittingly aid its distribution, as Facebook's News Feed algorithm did this time. Making communities smaller can make the rules simpler.
In the meantime, Alexis Mantzarlis of TED offers four good suggestions for Facebook to implement, which I'll repeat here in my own words. One, it should act faster – if centralization is the company's great virtue, it should use that power to detect such videos and apply fact-checks before they reach millions of views. Two, it should write its warning pop-ups in plain English. Goodbye "additional reporting is available"; hello "this video has been distorted to change its meaning." Three, it should follow up with users who shared the video before it was identified as fake and offer them the chance to share the correction. And finally, it should share more data with the public and with researchers about the effectiveness of fact-checking.
I don't think the Pelosi video heralds the end times for our information sphere. But I do think debates like this one, about what Facebook leaves up and what it takes down, will only intensify as bad actors find new ways to hijack our attention. I understand why Facebook wants to avoid making editorial judgments about political videos. But doing nothing is an editorial judgment too, and one for which social platforms are increasingly being called to account.
"The disabled accounts contain two names on Twitter (which) mimicked Republican congress candidates to push pro-Iranian political messages," reports Tony Romm:
Facebook and Twitter said Tuesday that they had taken down a sprawling disinformation campaign that appeared to originate in Iran, including two Twitter accounts that imitated Republican congressional candidates and sought to push what may have been pro-Iranian political messages.
Some of the disabled accounts appeared to target their propaganda at specific journalists, policymakers, dissidents and other influential Americans online. That tactic led experts to fear it could mark a new escalation in social-media warfare, in which malicious actors steal real identities to spread disinformation across the web.
A year after the General Data Protection Regulation came into force, Facebook has been the subject of the most investigations, Matthew Wall reports:
Social media giant Facebook and its subsidiaries Instagram and WhatsApp have been the subject of the most data investigations in the Republic of Ireland since the new European Union data protection rules came into force a year ago. (…)
The Irish Data Protection Commission says it has opened 19 statutory investigations, 11 of which focus on Facebook, WhatsApp and Instagram.
Alec Stapp has more data from the first year of GDPR:
€55,955,871 in fines
€50 million of that was a single fine against Google
281,088 total cases
89,271 data breach notifications
37.0% of cases still in progress
I find myself fairly sympathetic to tech executives who skip these public shaming sessions, because over the past two years they have produced nothing but tedious grandstanding. Yet every refusal generates a new round of negative headlines. Donie O'Sullivan and Paula "No Relation" Newton:
Facebook's Mark Zuckerberg and Sheryl Sandberg failed to appear at a hearing in Ottawa on Tuesday, despite being served summonses by the Canadian parliament.
The decision could lead to the executives being found in contempt of parliament, the senior Canadian politician who issued the summonses told CNN. The last time a member of the public was found in contempt of parliament was in 1913, according to the legislature.
Maciej Ceglowski has a very nice essay about working on political campaign security:
Trying to secure a modern campaign is like performing surgery with a scalpel made of anthrax spores. At some point you want to throw the anthrax scalpel down and say "this is impossible!", as it dissolves into a cloud of deadly dust. But the patient still needs you!
Fascinating Pranav Dixit piece about how the multiplayer shooter PlayerUnknown's Battlegrounds came to be banned in parts of India. It may be a consequence of Indians' broader concerns about WhatsApp and other social technologies, he reports:
Playing a video game seemed like a dubious reason for arrest to PUBG fans and free-internet advocates, but less than a week later, other parts of Gujarat, including Ahmedabad, the state's largest city, and Vadodara, its third largest, had banned the game, citing similar reasons.
The national hysteria around PUBG is unfolding at a time when Indians are grappling with the effects of rapid technological change: the deadly spread of rumors on WhatsApp, unrestrained harassment on social media, and dangerous disinformation campaigns. People are now demanding that technology companies grapple with their effects on users – and yet the specific panic around PUBG and the resulting arrests in Gujarat reflect lawmakers' blunt response when forces they regard as destabilizing take hold. Restrictions on video games are not unheard of; but arresting young men for playing them, in the name of "the education of children and young people," is a drastic and questionable way of protecting the interests of young adults.
I post stories about China and facial recognition here in part because this is the world we are all going to live in unless other countries regulate this kind of thing. An unsettling piece by Chris Buckley and Paul Mozur. (And while we're at it, here is another hair-raising facial recognition project by a Chinese programmer.)
In an instant, Kashgar, an ancient city in western China, flashed on a large screen on the wall, with colorful icons marking police stations, checkpoints, and the locations of recent security incidents. With a mouse click, a technician explained, the police can retrieve live videos from any surveillance camera or view someone passing by one of the thousands of checkpoints in the city.
To demonstrate, she showed how the system could retrieve the photo, home address, and official identification number of a woman who had stopped at a checkpoint on a major highway. The system searched billions of records and showed details about her education, family ties, links to a previous case and recent visits to a hotel and an internet cafe.
Speaking of China: TikTok's parent company is helping the government with a censorship campaign. Cate Cadell reports:
"We sometimes say that artificial intelligence is a scalpel, and that a human is a machete," said a content screening employee at Beijing Bytedance Co. Ltd, who asked not to be identified because they are not authorized to talk to the media .
Two employees of the company said that censorship of Tiananmen's crackdown along with other highly sensitive issues, including Taiwan and Tibet, are now largely automated.
Echo Wang and Carl O'Donnell have more on why the US government is trying to unwind a Chinese company's acquisition of the gay hookup app Grindr:
Two former national security officials said the acquisition heightened US fears about the potential for data abuse at a time of tense China-US relations. CFIUS has been paying closer attention to the security of personal data. In the past two years, it has blocked Chinese companies from buying money-transfer company MoneyGram International Inc and mobile marketing firm AppLovin.
Headquartered in West Hollywood, California, Grindr is especially popular among gay men and has approximately 4.5 million daily active users. CFIUS was likely worried that the Grindr database might contain compromising information about personnel who work in areas such as the military or intelligence services, and that it could end up in the hands of the Chinese government, the former officials said.
Daisuke Wakabayashi examines some of the consequences of Google's huge shift to contract labor. More contractors now work for Google than full-time employees:
Google's dependence on temporary help has caused more controversy than at other major tech companies, but the practice is common in Silicon Valley. Contingent labor accounts for 40 to 50 percent of the workforce at most technology companies, according to estimates from OnContracting, a site that helps people find tech contracting positions.
OnContracting estimates that a technology company can save an average of $100,000 a year per US job by using a contractor instead of a full-time employee.
"It creates a caste system within companies," says Pradeep Chauhan, who runs OnContracting.
"Four major German vendor houses are working together to combat the market power of technical platforms," says Jessica Davies. The idea is to compete better with Big Tech by combining more premium advertising stock:
The two new partners bring major news titles including Bild, Welt and Business Insider, along with magazine portfolios including Die Aktuelle. The additions will increase the alliance's online reach to a combined 50 million monthly unique users, according to the German online measurement agency AGOF. Facebook has around 40 million monthly unique users in Germany, according to Statista.
Nobody the Financial Times talks to here seems very optimistic about the prospects of a TikTok phone. But then, I was not very optimistic about ByteDance killing the Musical.ly brand and relaunching it as TikTok, either!
Citizen, the app for location-based crime reports, promised that it would not share "your name or other personally identifiable information." But when I tested it, I noticed it repeatedly sent my phone number, email, and exact GPS coordinates to the analytics tracker Amplitude.
After I contacted Citizen, it updated its app and removed the Amplitude tracker. (Amplitude, for its part, says the data it collects is kept private to its customers and not sold.)
An absolutely beautiful profile by Rebecca Jennings of Robbie Tripp, better known as the Curvy Wife Guy. In part it is about how everything about being an influencer is at once real and a put-on:
All this to say that Robbie Tripp – who has been in the spotlight in various ways in the nearly two years since the viral post, including most recently a "curvy girl hip-hop anthem" and accompanying video – has become a sort of avatar for multiple internet phenomena rolled into one: the debatably "woke" male feminist, the Instagram hustler, the TED talker, the online husband, the milkshake duck. He is a viral meme who stumbled into a much larger discourse and is still finding his place in it. But he is determined to make room for himself, despite what is written about him. Toward the end of the two days I spent with him, Robbie told me: "I have a motto: whatever people hate you for, do more of it."
Here's a good one for the Never Tweet files, from Anna Timms:
The scam started with a real tweet from the bank asking customers to share their customer service experience in an online survey.
Johnson's business partner tweeted back to report difficulties setting up the new account. The fraudster saw her tweet, Googled her details and called her on her company contact number, posing as a Metro Bank customer service agent called "Neil."
She was told that the call was in response to her tweet and that the bank wanted to make up for the poor service by setting up the new business account immediately. She was asked for the company's details as part of due-diligence checks required by the banking regulator, and she named Johnson as co-director.
Lily Hay Newman writes about an attempt to make cameras fraud-proof:
The NYU team demonstrates that you can adjust the signal processor inside a camera – whether a high-end digital SLR or an ordinary smartphone camera – so that it places watermarks in each photo's code. The researchers propose training a neural network to power the photo-development process that takes place inside cameras, as the sensors interpret the light hitting the lens and turn it into a high-quality image. The neural network is also trained to mark the file with indelible indicators that forensic analysts can check later if necessary.
"People still aren't thinking about security – you have to get close to the source where the image is captured," says Nasir Memon, one of the project's researchers at NYU Tandon, who specializes in multimedia security and forensics. "So what we are doing in this work is creating an image that is forensics-friendly, allowing better forensic analysis than a typical image. It is a proactive approach, rather than just making images for their visual quality and then hoping the forensic techniques work after the fact."
Sapiens author Yuval Noah Harari investigates why people are so smart and so stupid at the same time:
The dual nature of power and truth results in the curious fact that we humans know many more truths than any other animal, but we also believe much more nonsense. We are both the smartest and the most credulous inhabitants of planet Earth. Rabbits don't know that E = mc², that the universe is about 13.8 billion years old, or that DNA is made of cytosine, guanine, adenine and thymine. On the other hand, rabbits don't believe in the mythological fantasies and ideological absurdities that have captivated countless people for thousands of years. No rabbit would have been willing to crash a plane into the World Trade Center in the hope of being rewarded with 72 virgin rabbits in the afterlife.
Cass Sunstein suggests that we use the word "lapidation" to describe online hate mobs:
The English language needs a word for what happens when a group of people, furious over a real or imagined transgression, responds in a way disproportionate to the occasion, ruining the transgressor's day, month, year or life.
We could use an old word again: lapidation.
Technically, the word is synonymous with stoning, but it sounds much less violent. It is also obscure, making it easier to employ for contemporary purposes.
Alex Stamos says Facebook needs someone else to serve as CEO while Zuckerberg focuses his attention elsewhere:
"There is a legitimate argument that he has too much power," said Stamos, who left the company in 2018, at the Collision conference in Toronto. "He needs to give up some of that power. If I were him, I would hire a new CEO for the company."
Michael J. Coren is obsessed with a social network for plants. You upload a photo of a plant near you, and people around the world help you identify it. It is part of a research project that creates public-domain machine vision models and identifies new species:
I can now tell the wild radish apart from its more mischievous relatives. Distinguish the thimbleberry from the European dewberry. Identify a specimen by the hue of a petal or the serration of a leaf. At a glance, I can tell apart three kinds of forget-me-nots (field, broadleaf and wood), or distinguish a common yarrow from a high mallow. This winter, after weeks of rain watered a carpet of wild leek alongside my lettuce, I gathered the ingredients for a wild pesto and salad in the local parks. Pl@ntNet is a tireless tutor that constantly refines and corrects my observations; for me, it's as exciting as learning a new language.
And finally …
1999: there are millions of websites that are all linked together
2019: there are four websites, each filled with screenshots of the other three.
– David Masad (@badnetworker) 28 May 2019
Talk to me
Send me tips, comments, questions, and videos in which my speech has been slowed down to make me look drunk: firstname.lastname@example.org.