Irwin said Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety was the company’s number one priority. “He stresses that every day, several times a day,” she said.
The approach to safety Irwin described at least in part reflects an acceleration of changes already planned since last year around Twitter’s handling of hate speech and other policy violations, according to former employees familiar with that work.
One approach, enshrined in the industry mantra of “freedom of speech, not freedom of reach,” involves leaving up certain tweets that violate company policies but preventing them from appearing in places like the home timeline and search.
Twitter has long deployed such “visibility filtering” tools around misinformation and had incorporated them into its official hate speech policy before the Musk acquisition. The approach allows for more freedom of speech while reducing the potential harm associated with viral abusive content.
The number of tweets containing hateful content on Twitter rose sharply in the week before Musk tweeted on Nov. 23 that impressions, or views, of hate speech were declining — one example of researchers pointing to the prevalence of such content even as Musk touts a reduction in its visibility, according to the Center for Countering Digital Hate.
Tweets containing anti-Black slurs that week were triple the number seen in the month before Musk took over, while tweets containing a homophobic slur were up 31 percent, the researchers said.
‘More risks, move fast’
Irwin, who joined the company in June and previously held security roles at other companies including Amazon.com and Google, hit back at the suggestion that Twitter lacked the resources or willingness to protect the platform.
She said layoffs did not significantly affect full-time employees or contractors working in what the company called its “Health” departments, including “critical areas” such as child safety and content moderation.
Two sources familiar with the cuts said more than 50 percent of the health department was laid off. Irwin did not immediately respond to a request for comment on that assertion, but had previously denied that the health team was seriously affected by the layoffs.
She added that the number of people working on child safety has not changed since the acquisition and that the product manager for the team is still there. Irwin said Twitter backfilled some positions for people who left the company, though she declined to give specific figures on the size of the team.
She said Musk was focused on relying more heavily on automation, arguing that the company had in the past erred on the side of time- and labor-intensive human reviews of harmful content.
“He has encouraged the team to take more risks, move fast and get the platform safe,” she said.
For example, on child safety, Irwin said Twitter had shifted to automatically removing tweets reported by trusted figures with a track record of accurately flagging malicious posts.
Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she has noticed Twitter recently removing some content within 30 seconds of her reporting it, without acknowledging receipt of her report or confirming its decision.
In the interview on Thursday, Irwin said Twitter, working with cybersecurity group Ghost Data, had removed about 44,000 accounts involved in child safety violations.
Twitter is also restricting hashtags and search results frequently associated with abuse, such as those aimed at looking up “teen” pornography. Past concerns about the impact of such restrictions on permitted uses of the terms had dissipated, she said.
Using “trusted reporters” was “something we’ve discussed at Twitter in the past, but there was some hesitation and frankly just some delay,” Irwin said.
“I think we now have the opportunity to really move forward with that kind of thing,” she said.