The people charged with ensuring that AI does not destroy humanity have left the building.

Everything happens so much. I’m in Seoul for the International AI Summit, the half-year follow-up to last year’s Bletchley Park AI Safety Summit (the full sequel will be in Paris this fall). As you read this, the first day of events will have barely concluded, although, to keep the fuss down this time, it was simply a “virtual” leaders’ meeting.

When the date was set for this summit (alarmingly late for, say, a journalist with two preschoolers for whom four days away from home is a juggling act), it was clear there would be a lot to cover. The hot summer of AI is here:

The inaugural AI safety summit, held at Bletchley Park in the UK last year, announced an international testing framework for AI models, following calls … for a six-month pause in the development of powerful systems.

There hasn’t been any pause. The Bletchley statement, signed by the UK, US, EU, China and others, praised the “huge global opportunities” of AI but also warned of its potential to cause “catastrophic” harm. It also secured commitments from big tech companies, including OpenAI, Google and Mark Zuckerberg’s Meta, to cooperate with governments to test their models before they are released.

While the UK and US have established national AI safety institutes, AI development in industry has continued. … OpenAI released GPT-4o (the o stands for “omni”) for free online; a day later, Google previewed a new AI assistant called Project Astra, as well as updates to its Gemini model. Last month, Meta released new versions of its own AI model, Llama. And in March, the AI startup Anthropic, formed by former OpenAI employees who disagreed with Altman’s approach, updated its Claude model.

Then, the weekend before the summit started, everything kicked off at OpenAI as well. Most striking, perhaps, was that the company found itself in a dispute with Scarlett Johansson over one of the voice options available in the new version of ChatGPT. After approaching the actor to voice its new assistant, an offer she turned down twice, OpenAI launched GPT-4o with “Sky” demonstrating its new capabilities. The voice’s similarity to Johansson was immediately obvious to everyone, even before CEO Sam Altman tweeted “her” after the presentation (the name of the Spike Jonze film in which Johansson voiced a super-intelligent AI). Despite denying the similarity, OpenAI removed the Sky voice option.

Most importantly, however, the two men who ran the company/nonprofit/secret-villain organization’s “superalignment” team, which was dedicated to ensuring that its efforts to build superintelligence do not wipe out humanity, have resigned. The first to go was Ilya Sutskever, co-founder of the organization and leader of the coup that temporarily and ineffectively overthrew Altman. His departure attracted attention, but was not unexpected: if you come at the king, you best not miss. Then on Friday, Jan Leike, Sutskever’s co-head of superalignment, also left, and he had a lot more to say:

A former senior OpenAI employee said the company behind ChatGPT is prioritizing “shiny products” over safety, and revealed he resigned after a disagreement over key goals reached a “breaking point.”

Leike detailed the reasons for his departure in an X thread posted on Friday, in which he said safety culture had become a lower priority. “In recent years, safety culture and processes have taken a backseat to shiny products,” he wrote.

“These problems are quite difficult to solve and I am concerned that we are not on the trajectory to get there,” he wrote, adding that it was becoming “more difficult” for his team to conduct its research.

“Building machines smarter than humans is an inherently dangerous task. OpenAI takes on an enormous responsibility on behalf of all humanity,” Leike wrote, adding that OpenAI “must become a safety-first AGI (artificial general intelligence) company.”

Leike’s resignation note was a rare display of dissent at the organization, which has previously been portrayed as almost single-minded in pursuing its goals (which sometimes means Sam Altman’s goals). When the charismatic CEO was fired, it was reported that almost all staff had accepted offers from Microsoft to follow him to a new AI laboratory created under the house of Gates, which also holds the largest external stake in OpenAI’s corporate subsidiary. Even when several staff members quit to form Anthropic, a rival artificial intelligence company that distinguishes itself by talking about how much it focuses on safety, the amount of trash-talking was kept to a minimum.

It turns out (surprise!) that’s not because everyone loves each other and has nothing bad to say. From Kelsey Piper at Vox:

I have seen the extremely restrictive off-boarding agreement that contains non-disclosure and non-disparagement provisions to which former OpenAI employees are subject. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If an outgoing employee refuses to sign the document, or violates it, they can lose all of the equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he left OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has publicly confirmed that he had to surrender what would likely have turned out to be a huge sum of money in order to quit without signing the document.

Just a day later, Altman said the clawback provision “should never have been something we had in any documents”. He added: “we have never clawed back anyone’s vested equity, nor will we if people do not sign a separation agreement. this is on me and one of the few times i’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.” (Capitalization model his own.)

Altman did not address the broader allegations of a strict and wide-ranging NDA, and although he promised to fix the clawback provision, he said nothing about the other carrot-and-stick incentives offered to get employees to sign the exit paperwork.

As for the suits, it is perfect. Altman has been a leading advocate for national and international regulation of AI. Now we see why it might be necessary. If OpenAI, one of the largest and best-resourced AI labs in the world, one which claims safety is at the root of everything it does, cannot even keep its own team together, then what hope is there for the rest of the industry?

Slop

‘Shrimp Jesus’ is an example of extravagant AI-generated art being shared on Facebook

It’s fun to watch a term of art develop before your eyes. Mail had spam; email had spam; the world of AI has slop:

“Slop” is what you get when material generated by artificial intelligence is placed on the web for anyone to see.

Unlike a chatbot, slop is not interactive, and rarely aims to answer readers’ questions or serve their needs.

But like spam, its overall effect is negative: the time and effort lost by users who now have to wade through slop to find the content they’re actually looking for far outweigh the benefit to the slop’s creator.

I’m interested in helping popularize the term, for the same reasons as Simon Willison, the developer who brought its emergence to my attention: it’s crucial to have simple ways of talking about AI done badly, to preserve the ability to recognize that AI can be done well.

The existence of spam implies email that you want to receive; the existence of slop implies AI content that you do want to see. For me, that’s content I’ve generated myself, or at least that I expect to be AI-generated. No one cares about the dream you had last night, and no one cares about the response you got from ChatGPT. Keep it to yourself.

The wider TechScape

Ed Dwight exits the NS-25 Mission crew capsule. Photograph: Blue Origin/AFP/Getty Images
