Twitter writes new rules when it could just enforce its existing ones

In August, when Alex Jones was banned from the platform, Twitter invited The New York Times to sit in on a meeting about why it had taken so long. It later emerged that Jones had already broken the company's rules at least seven times, but CEO Jack Dorsey still hesitated to pull the trigger. By the end of the meeting, Dorsey had instructed his subordinates to create a new policy forbidding "dehumanizing speech."

His subordinates spent the following year trying to figure out what that meant.

A broad draft of the policy was published in September. Today the company unveiled the end product: an update to its rules on hateful conduct that narrowly prohibits speech that dehumanizes others on the basis of religion. It is no longer kosher to call people who keep kosher (or the members of any other religion) maggots, vermin, or viruses. Any existing tweet that breaks the rule must be deleted if it is reported, a provision that has already tripped up Louis Farrakhan, and tweeting dehumanizing anti-religious sentiments in the future could lead to account suspensions or even outright bans.

All of this made for a somewhat unexpected outcome: the original Times story had not even mentioned religion. In a new piece, the Times' Kate Conger reports that Twitter ultimately decided religion was the easiest place to start in implementing the policy:

"Although we started with religion, our intention has always been an extension for all protected categories," said Jerrel Peterson, Twitter Security Policy Officer, in an interview. "We just want to be methodical."

The narrowing of Twitter's effort to define dehumanizing speech illustrates the challenges the company faces as it sorts through what to allow on its platform. While the new guidelines help draw clearer lines around what it will and will not tolerate, it took Twitter nearly a year to write the rules, and even then they cover just a fraction of the speech it originally said it would address.


That is all fine as far as it goes, and yet you can still read it and think: really? Did it take until a Tuesday in 2019 for Twitter to forbid saying "Jews are vermin"? Even for a company notorious for moving at a geological pace, today's update feels overdue.

It also feels unnecessary.

Read the Twitter rules and you will see that already-banned behaviors include inciting "fear about a protected category," with the example "all (religious group) are terrorists." The rules also prohibit "hateful imagery," including swastikas. And yet, as most Twitter users will tell you, vicious anti-Semites and open Nazis appear in the timeline all too often, to the point that Jack Dorsey spent much of his winter podcast tour answering questions about Nazis' enduring presence on the service. (Twitter says the key change here is that the rules previously applied only to tweets targeting individuals, so you could say "Protestants are scum" but not "Casey is Protestant scum.")

New policies will always be needed to account for the ever-evolving nature of human speech and shifting cultural norms. But they will never be enough to make users feel safe. What matters far more is that the policies are actually enforced.

The Times story includes comments from Twitter about how it is training its force of content moderators to enforce the new rules. And the company has begun reporting high-level data about its enforcement actions, which gives us some sense of the scale of the problem Twitter faces.


The most recent report shows that Twitter users reported 11 million unique accounts between July and December 2018, an increase of 19 percent over the previous reporting period. And yet Twitter took action against just 250,806 accounts, down 4 percent from the previous period.

The data doesn't get any more granular than that, so it is impossible to judge the effectiveness of Twitter's moderation from the report alone. But the numbers suggest that users' frustration with the product far outstrips the moderators' willingness or ability to act on it: the accounts Twitter acted on amount to roughly 2 percent of those reported. Viewed this way, Twitter doesn't have a policy-writing problem; it has an enforcement problem.

Democracy

The true origin of the Seth Rich conspiracy theory

Citing the former assistant U.S. attorney who handled the case, Michael Isikoff reports that Russian operatives originated the hoax that former DNC staffer Seth Rich was murdered to cover up corruption in the Hillary Clinton campaign. Fox News loved the story, and the consequences for Rich's family were cruel.

In the summer of 2016, Russian intelligence agents secretly planted a fake report claiming that Democratic National Committee staffer Seth Rich was gunned down by a squad of assassins working for Hillary Clinton, giving rise to a notorious conspiracy theory that captivated conservative activists and was later promoted from inside President Trump's White House, a Yahoo News investigation has found.

Russia's foreign intelligence service, known as the SVR, first circulated a fake "bulletin," disguised as a real intelligence report, about the alleged murder of the former DNC staffer on July 13, 2016, according to the former federal prosecutor responsible for the Rich case. That was just three days after Rich, 27, was killed in what police believed was a botched robbery while walking home to his group house in the Bloomingdale neighborhood of Washington, D.C., about 30 blocks north of the Capitol.

President Trump can't block his critics on Twitter, appeals court rules

Colin Lecher reports:

President Trump violated the First Amendment by blocking his critics on Twitter, a federal appeals court ruled today, rejecting the White House's request to overturn a lower court's decision.


Amazon Workers Plan Prime Day Strike Despite Wage Pledge

In the latest case of worker unrest at a major technology company, Amazon warehouse workers are planning to protest their low wages on Prime Day next week. Josh Eidelson and Spencer Soper report:

Employees at a Shakopee, Minnesota, fulfillment center are planning a six-hour work stoppage on July 15, the first day of Prime Day. Amazon started the event five years ago, using steep discounts on televisions, toys, and clothing to attract and retain Prime members, who pay subscription fees in exchange for free shipping and other extras.

"Amazon is going to tell a story about themselves, namely that they can send a Kindle to your house in one day, not great," said William Stolz, one of the Shakopee employees who organized the strike. "We want to take the opportunity to talk about what is needed to make that work happen and put pressure on Amazon to protect us and provide safe, reliable jobs."

FTC Said to Ask about Disabling YouTube ads for children's privacy

Ben Brody reports that the chairman of the Federal Trade Commission is asking whether YouTube could disable ads on children's content:

During a July 1 call, Chairman Joseph Simons and fellow Republican Commissioner Noah Phillips suggested that the world's largest video site wouldn't have to move all children's content to a separate platform, as advocates have suggested, according to the person. Instead, individual channels might disable advertising to bring the site into compliance with U.S. law's prohibition on collecting information about children under the age of 13 without parental consent.

The FTC is investigating Google's YouTube for possible violations of the Children's Online Privacy Protection Act. The heads of two children's advocacy groups that had previously filed complaints against the site took part in the call, the person said.

As cameras track Detroit's residents, a debate ensues over racial bias


Interesting note in Amy Harmon's article on Detroit's ongoing debate over facial recognition and surveillance: no one even knows exactly why facial recognition algorithms are racially biased. ("Facial recognition software marketed by Amazon misidentifies darker-skinned women as men 31 percent of the time.")

It is not clear why facial recognition algorithms perform differently across racial groups, researchers say. One reason may be that the algorithms, which learn to recognize patterns in faces by viewing large numbers of them, are not trained on a sufficiently diverse set of photos.

But Kevin Bowyer, a computer scientist at Notre Dame, said this was not the case in a study he recently published. Nor is it certain that skin color itself is the culprit: facial structure, hairstyles and other factors may also contribute.

Exclusive: the Harvard professor behind Facebook's oversight board defends his role

Mark Sullivan interviews Noah Feldman, who helped come up with the idea for Facebook's upcoming independent content oversight board.

FELDMAN: For example, think of the public debate over whether Mark was right when he suggested that Holocaust denial should not be taken down as hate speech. A lot of people were angry and said, "How dare you say that?" The whole point, the point that Mark gets, is that Mark shouldn't be the one deciding that! It shouldn't be up to Mark. That is a really hard decision to weigh, and in the future it will be made by this board. That is a good example of the kind of hard content decision about where the boundaries of hate speech lie. That is sort of an archetypal situation.

Then there will also be situations where Facebook may have set a community standard that doesn't really match its own values. And in those cases, I would imagine the board would say to Facebook, "Listen, your community standard is wrong; it doesn't match the values you've articulated, so you have to change it."

Why China's social credit systems are surprisingly popular

Adam Minter investigates why there has been no popular uprising against social credit systems in China:

It is chilling, dystopian, and probably quite popular. Chinese people have already embraced a whole range of private and government systems that collect, aggregate and distribute data about digital and offline behavior. Portrayed outside of China as a creepy digital panopticon, this network of so-called social credit systems is seen within China as a means of generating something the country sorely lacks: trust. For that, constant surveillance and the loss of privacy are a small price to pay.

As in many developing countries, China's economic growth has outpaced its ability to create institutions and enforcement bodies that promote trust between citizens and businesses. For example, a decade after Chinese milk producers were revealed to have sold tainted infant formula, Chinese parents still shun the country's dairy industry, and distrust of food producers remains nearly universal. Meanwhile, China remains the counterfeiting capital of the world. Some of its most recognizable companies, including Alibaba Group Holding Ltd., Tencent Holdings Ltd. and Pinduoduo Inc., are known as thriving markets for fakes, undermining the credibility of Chinese e-commerce in general.

Elsewhere


Facebook diversity goal: double its female workforce by 2024

Facebook is setting tougher goals around diversity, Kurt Wagner reports:

"We imagine a company in which in five years at least fifty percent of our workforce will be women, people who are black, Latin American, Indian, Pacific Island, people with two or more ethnic groups, people with disabilities and veterans "Maxine Williams, Facebook & # 39; s chief diversity officer, wrote in a blog post on Tuesday.

Facebook released the new goals alongside its annual diversity report, which breaks down its staff by ethnicity and gender. Williams said that reaching 50 percent underrepresented employees in the US is both a "stretch" and an "aspirational" goal. About 43 percent of Facebook's US employees currently come from underrepresented groups.

Facebook's ex-security chief on disinformation campaigns: 'The sexiest explanation is usually not true'

Alex Stamos talks to Victoria Kwan about the disinformation landscape:

STAMOS: At a certain point you start to realize that they are mostly scammers. This is the truth of the internet: there are tens of thousands of people whose job it is to spread spam on Facebook. It is their career. There are hundreds of times more people doing that than working on professional disinformation campaigns for governments. So you basically have to accept that the sexiest explanation is usually not true.

This is something that companies go through as well. They hire new analysts, and they jump to wild conclusions. "I have found a Chinese IP, maybe it is MSS (Ministry of State Security)." It is probably not MSS; it is probably an unpatched Windows machine in China. This is also why you do red teaming, and why you have disinterested parties whose job it is to question the conclusions.

Mark Zuckerberg's security chief Liam Booth leaves after allegations of misconduct


"Mark Zuckerberg's family office says there was no evidence to substantiate allegations of misconduct against Liam Booth, but he leaves anyway," reports Rob Price.

Exclusive investigation: sex, drugs, misogyny and sleaze at the HQ of Bumble's owner

Angel Au-Yeung's in-depth look at the company behind Bumble finds "a headquarters that more than a dozen former employees describe as toxic, especially toward women."

"When I served as the company's CMO, I was told that I had to make a significant effort for investors and applicants & # 39; horny & # 39; to work for Badoo," said Jessica Powell, 2011 marketing director of Badoo up to 2012 in an email. "I was once asked to give a designer candidate a massage." She says she refused and added that "female employees were routinely discussed in terms of their appearance."

"When female staff spoke, their concerns were ignored or minimized," she adds, rejecting a "woman-unfriendly atmosphere."

GitHub has removed Open Source versions of DeepNude

GitHub will no longer host new versions of an app that created fake nude images of women, Joseph Cox reports:

"We are not proactively monitoring user-generated content, but we are actively investigating abuse reports. In this case, we disabled the project because we felt it was against our acceptable use policy," a GitHub spokesperson told Motherboard in a statement. "We do not approve of using GitHub to post sexually obscene content and prohibit such behavior in our Terms of Service and Community Guidelines."


When The Times first says it, this Twitter bot tracks it

We tend to talk a lot about malicious bots here, so I appreciated Alexandria Symonds' charming profile of this extremely good and funny Twitter bot, apparently managed by a 24-year-old Googler:

The bot is a computer program that scrapes the Times website every hour for new articles and compares them to a memory bank of words the paper has used before. It then tweets any words that appear to be new. On a typical day it posts a handful of tweets, consisting of neologisms, scientific terms, foreign-language words and the occasional typo.

On June 28 the tweets read: zendale, zombiecorn, biofocals, parasexualized, dobok, doors'll, gaytriarchy. (The last was by far the most popular.)
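To make the mechanics concrete, here is a minimal sketch of how a scrape-and-compare bot like this could work. It is not the bot's actual code: the page URL, the word-list file, and the crude plain-text extraction are all assumptions for illustration, and it prints candidate words instead of tweeting them.

```python
import re
import requests

SEEN_WORDS_FILE = "seen_words.txt"          # hypothetical memory bank of previously seen words
SOURCE_URL = "https://www.nytimes.com/"     # placeholder page to scrape, for illustration only


def load_seen_words(path):
    """Load the set of words the bot has already encountered."""
    try:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return set()


def extract_words(html):
    """Crudely strip tags and pull lowercase alphabetic tokens from the page."""
    text = re.sub(r"<[^>]+>", " ", html)
    return set(re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower()))


def find_new_words():
    seen = load_seen_words(SEEN_WORDS_FILE)
    html = requests.get(SOURCE_URL, timeout=30).text
    candidates = extract_words(html) - seen
    # Record the new words so each one is only reported once.
    with open(SEEN_WORDS_FILE, "a", encoding="utf-8") as f:
        for word in sorted(candidates):
            f.write(word + "\n")
    return sorted(candidates)


if __name__ == "__main__":
    # A real bot would run this on a schedule (e.g. hourly) and tweet the results.
    for word in find_new_words():
        print(word)
```

A real version would presumably bootstrap its memory bank from a large archive of past articles first, so that ordinary words are not flagged as new on the first run.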

Launches

Facebook says it will launch experimental apps under the NPE Team name

Three and a half years after shutting down its similar Creative Labs division, Facebook appears to be relaunching it with a worse name, reports Chaim Gartenberg:

Facebook is launching a new brand of experimental consumer apps, developed under the "NPE Team, from Facebook" label. (NPE stands for "new product experiments.") The team will develop new apps for iOS, Android and the web, with a specific focus on consumer services, similar to Microsoft's Garage group.

In a blog post announcing the new team, the company said it "decided to use this separate brand name to help set the appropriate expectations with users that NPE Team apps will change very rapidly and will be shut down if we learn that they're not useful to people."

Facebook is trying to tempt creators with more monetization options


Facebook is going to start taking a cut of creators' earnings. But it also has some new goodies to hand out:

Ahead of VidCon, Facebook announced a host of money-making options for its creators, including more paid groups, ad placement options, and packages of Stars that viewers can purchase and send as tips during live streams. Facebook is trying to lure video creators away from competitors like YouTube and Patreon with monetization features such as Fan Subscriptions, a $4.99 monthly subscription that offers fans exclusive content, which opened up to more creators earlier this year. The features announced today are designed to give creators more ways to make money on the platform and to customize the experience fans get when they visit their Facebook pages.

YouTube makes it easier for creators to handle copyright claims

Jake reports on a creator-friendly change that YouTube is announcing at VidCon this week:

Owners of copyrighted content (such as a record label or a film studio) must now indicate exactly where their copyrighted material appears in a video when manually reporting an infringement, something they did not have to do in the past. That way, creators can easily verify whether a claim is legitimate and edit out the content if they don't want to deal with the consequences, such as lost revenue or having the video taken down.

Until now, copyright owners did not have to say where infringing content appeared when claiming manually. That has been a source of much frustration for creators, who would have to search through long videos to figure out exactly which portion was even at issue. The lack of detail made it harder to dispute claims, and it meant that if a creator tried to edit out potentially infringing content, they would have to wait and see whether the copyright owner agreed the problem had been resolved before the claim would be released.

How to run a small social networking site for your friends

Darius Kazemi has a nice new project where he teaches you how to create and host your own social network. Let me know if any of you try this!

Takes


If you're going to brag about trolling, you should at least be good at it

Brian Feldman takes apart this absurd complaint in Vice ("a counterculture publication that does sponcon for Bank of America") from a man whose Twitter account was banned after he sent death threats to the Mr. Peanut brand account.

Tweeting, apropos of nothing, that you are going to put a bullet in someone's brain is not a good joke. It is only funny if you find context-free threats of violence that are indistinguishable from real online harassment funny. Some people do, and if that's you, I wish you every success when you attend high school in the fall and urge you not to put off your summer reading assignment until the last minute.

And finally …

Juggalo makeup blocks facial recognition technology

With facial recognition surveillance systems being deployed across America, citizens are rightly concerned and looking for countermeasures. Fortunately, you can fool many systems by joining the notorious music / wrestling / Faygo fandom known as the Insane Clown Posse. Ming Lee Newcomb reports:

It appears that Juggalo makeup cannot be accurately read by many facial recognition technologies. The most common programs identify areas of contrast (such as those around the eyes, nose and chin) and then compare those points against images in a database. The black bands often used in Juggalo makeup obscure the mouth and cover the chin, entirely redefining a person's key features.

As Twitter user @tahkion points out (via Yahoo!), the black-on-white face paint tricks most facial recognition into misreading someone's jawline and, presumably, eye contours.
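The "compare those points against a database" step the quote describes can be illustrated with a minimal sketch using the open-source face_recognition library; this is not how any particular surveillance vendor's system works, just one common open-source approach, and the image file names are placeholders.

```python
# Minimal sketch of matching a face against a small "database" of known faces,
# using the open-source face_recognition library. File paths are placeholders.
import face_recognition

# Build the database: a 128-dimensional feature encoding per known person.
known_images = {"alice": "alice.jpg", "bob": "bob.jpg"}
known_names, known_encodings = [], []
for name, path in known_images.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip images where no face was detected
        known_names.append(name)
        known_encodings.append(encodings[0])

# Encode the face in a new photo and compare it against the database.
probe = face_recognition.load_image_file("unknown.jpg")
probe_encodings = face_recognition.face_encodings(probe)

if not probe_encodings:
    # Heavy face paint can make detection fail outright.
    print("No face detected")
else:
    distances = face_recognition.face_distance(known_encodings, probe_encodings[0])
    best = distances.argmin()
    # Lower distance means a closer match; 0.6 is the library's conventional threshold.
    if distances[best] < 0.6:
        print(f"Match: {known_names[best]} (distance {distances[best]:.2f})")
    else:
        print("No match in database")
```

Makeup that shifts where the detector thinks the jawline, mouth, and eyes are perturbs that encoding, so the distances to the stored faces stop lining up, which is the effect the tweet describes.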

And you'll look great as a clown!

Talk to me


Send me tips, comments, questions and pictures of you as a Juggalo: casey@theverge.com.