Marc Andreessen once called online security teams enemies. He still wants walled gardens for children

In his polarizing “Techno-Optimist Manifesto” last year, venture capitalist Marc Andreessen listed a series of enemies of technological progress. These included “tech ethics” and “trust and safety,” a term used for online content moderation work, which he said had been used to subject humanity to “a massive demoralization campaign” against new technologies like artificial intelligence.

Andreessen’s statement drew both public and private criticism from people who work in those fields, including at Meta, where Andreessen is a board member. Critics felt his essay misrepresented their work to keep internet services safer.

On Wednesday, Andreessen offered some clarification: When it comes to his 9-year-old son’s online life, he’s in favor of safety barriers. “I want him to be able to sign up for internet services, and I want him to have a Disneyland-like experience,” the investor said in an onstage conversation at a conference held by Stanford University’s human-centered AI research institute. “I love the free internet for everyone. Someday he will love the internet too, but I want him to have walled gardens.”

Contrary to what his manifesto might have suggested, Andreessen went on to say that he welcomes technology companies, and by extension their trust and safety teams, setting and enforcing rules for the type of content allowed on their services.

“There is a lot of freedom, company by company, to be able to decide this,” he said. “Disney enforces different codes of conduct at Disneyland than what happens on the streets of Orlando.” Andreessen alluded to how technology companies can face government sanctions for allowing images of child sexual abuse and certain other types of content, so they cannot do away with trust and safety teams entirely.

So what kind of content moderation does Andreessen consider an enemy of progress? He explained that he fears a future in which two or three companies dominate cyberspace and become “coupled” with the government in a way that makes certain restrictions universal, causing what he called “potent societal consequences” without specifying what those might be. “If you end up in an environment where there is pervasive censorship and pervasive controls, then you have a real problem,” Andreessen said.

The solution, as he described it, is to ensure competition in the tech industry and a diversity of approaches to content moderation, some with greater restrictions on speech and actions than others. “What happens on these platforms really matters,” he said. “What happens in these systems really matters. What happens in these companies really matters.”

Andreessen did not mention any company by name, but his remarks evoked Elon Musk’s takeover of Twitter, an investment Andreessen backed; after acquiring the platform, Musk shut down Twitter’s AI ethics team, relaxed content rules, and reinstated users who had previously been permanently banned.

Those changes, along with Andreessen’s investment and manifesto, created a perception that the investor wanted few limits on free speech. His clarifying comments came during a conversation with Fei-Fei Li, co-director of Stanford’s HAI, titled “Removing Impediments to a Robust AI Innovative Ecosystem.”

During the session, Andreessen also repeated arguments he has made over the past year that slowing down AI development through regulations or other measures recommended by some AI safety advocates would repeat what he sees as the United States’ misguided retreat from investment in nuclear energy several decades ago.

Nuclear power could have been a “silver bullet” for many of today’s concerns about carbon emissions from other electricity sources, Andreessen said. Instead, the United States retreated, and climate change has not been contained as well as it could have been. “It’s an overwhelmingly negative, risk-averse framework,” he said. “The presumption in the discussion is that if there is potential harm, then there should be regulations, controls, limitations, pauses, stops and freezes.”

For similar reasons, Andreessen said, he wants to see greater government investment in AI infrastructure and research, and AI experimentation given free rein, for example by not restricting open-source AI models in the name of security. If his son is to experience AI in a Disneyland-like walled garden, though, some rules may also be necessary, whether from governments or from trust and safety teams.
