
Why I can’t stop writing about Elon Musk


“I hope I don’t have to cover Elon Musk again for a while,” I thought last week after sending TechScape to readers. Then I got a message from the news editor: “Can you keep an eye on Elon Musk’s Twitter feed this week?”

I ended up intently reading the world’s most powerful posting addict, and my brain turned to liquid and leaked out of my ears:

His shortest rest came on Saturday night, when he went offline after retweeting a meme comparing London’s Metropolitan Police to the Nazi SS, only to return four-and-a-half hours later to retweet a crypto influencer complaining about jail sentences for Britons who attended protests.

But I was somewhat surprised by what I found. I knew the general contours of Musk’s internet presence from having covered it for years: a triple divide between the promotion of his real businesses, Tesla and SpaceX; the enthusiastic broadcasting of low-rent nerd humour; and increasingly right-wing political agitation.

Following Musk in real time, however, revealed how his chaotic posting has been warped by his shift to the right. His promotion of Tesla is increasingly filtered through the culture war, and the Cybertruck in particular is hawked with language that makes it sound as if buying one will help defeat the Democrats in November’s US presidential election. The aforementioned low-rent nerd humour is tinged with anger at a world that doesn’t think he’s the coolest person in it. And the right-wing political agitation is increasingly extreme.

Musk’s involvement in the UK riots appears to have pushed him further into the arms of the far right than ever before. This month, he tweeted for the first time at Lauren Southern, a far-right Canadian internet personality who is most famous in the UK for having been banned from entering the country by Theresa May’s government over her Islamophobia. More than just a tweet: he also supports her financially, sending her around £5 a month through Twitter’s subscription function. Then there was the headline-grabbing retweet of the Britain First co-leader. On its own, it could have been put down to Musk not knowing what pond he was swimming in; two weeks later, the pattern is clearer. Now, these are his people.

Well, that’s fine then.

Today in AI: a clear example of the difference between scientific press releases and scientific papers. The University of Bath press release:

AI does not pose an existential threat to humanity, according to a new study.

LLMs have a superficial ability to follow instructions and excel at proficiency in language; however, they have no potential to master new skills without explicit instruction. This means that they remain inherently controllable, predictable and safe.

The paper, from Lu et al.:

It has been claimed that large language models, comprising billions of parameters and pre-trained on large web-scale corpora, acquire certain capabilities without having been specifically trained on them… We present a new theory explaining emergent abilities, taking into account their potential confounds, and rigorously corroborate this theory through over 1000 experiments. Our findings suggest that the purported emergent abilities are not truly emergent, but result from a combination of in-context learning, model memory, and linguistic knowledge.

Our work is a fundamental step towards explaining the performance of language models, as it provides a model for their efficient use and clarifies the paradox of their ability to excel in some cases and fail in others. In this way, we demonstrate that their capabilities should not be overestimated.

The press release version of this story has gone viral, for predictable reasons: everyone likes to see the Silicon Valley titans defeated, and the existential risk of AI has become a divisive topic in recent years.

But the paper is a long way from the claim the university press office wants to make of it, which is a shame, because what the paper does say is interesting and important nonetheless. There has been a lot of focus on so-called “emergent” capabilities in frontier models: tasks and capabilities that were not present in the training data but which the AI system demonstrates in practice.

These emergent capabilities are alarming for those concerned about existential risk, because they suggest that AI safety is harder to ensure than we might like. If an AI can do something it hasn’t been trained to do, then there’s no easy way to ensure that a future AI system is safe: you can leave things out of the training data, but it might figure out how to do them anyway.

The paper demonstrates that, at least in some situations, those emergent skills are nothing of the sort. Instead, they are the result of what happens when you take an LLM like GPT and shape it into a chatbot, before asking it to solve problems in the form of a question-and-answer conversation. That process, the paper suggests, means that the chatbot can never be asked a truly “zero-shot” question, for which it has no prior guidance: the art of prompting ChatGPT is inherently that of teaching it a bit about what form the answer should take.
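That distinction is easy to make concrete. Below is an illustrative sketch (my own, not from the paper): a “zero-shot” prompt gives the model no hint of the answer’s form, while a few-shot prompt smuggles in worked examples, which is precisely the in-context learning the authors argue accounts for most apparently “emergent” behaviour:

```python
def zero_shot_prompt(question: str) -> str:
    """A bare question: no examples, no hint of what the answer looks like."""
    return f"Q: {question}\nA:"

def few_shot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Prepend worked examples that teach the model what form the
    answer should take before asking the real question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

examples = [
    ("Reverse the word 'cat'.", "tac"),
    ("Reverse the word 'dog'.", "god"),
]
print(few_shot_prompt("Reverse the word 'bird'.", examples))
```

The few-shot version has quietly told the model the task, the answer format, and two solved instances; the paper’s argument is that performance attributed to “emergence” largely comes from this kind of scaffolding.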

It’s an interesting finding. It doesn’t prove that the AI apocalypse is impossible, but if you want good news, it suggests that it’s unlikely to happen tomorrow.

Training pains

Nvidia accused of “unjust enrichment” Photo: Dado Ruvic/Reuters

Nvidia used YouTube to train its artificial intelligence systems. Now that’s coming back to bite it:


A federal lawsuit alleges that Nvidia, which focuses on designing chips for artificial intelligence, took YouTube creator David Millette’s videos for its AI training work. The suit accuses Nvidia of “unjust enrichment and unfair competition” and seeks to be granted class-action status to include other YouTube content creators with similar claims.

Nvidia illegally “scraped” YouTube videos to train its Cosmos AI software, according to the lawsuit, filed Wednesday in the Northern District of California. Nvidia used software on commercial servers to evade YouTube detection and download “approximately 80 years of video content per day,” the lawsuit says, citing a 5 August report from 404 Media.

This lawsuit is unusual in the AI world, if only because Nvidia has been somewhat tight-lipped about the sources of its training data. Most AI companies that have faced lawsuits have been proudly open about their disregard for copyright limitations. Take Stable Diffusion, which sourced its training data from the open-source LAION dataset:

Judge Orrick found that the artists had reasonably argued that the companies are violating their rights by illegally storing their works, and that Stable Diffusion, the AI image generator in question, may have been built “largely on top of copyrighted works” and was “created to facilitate that infringement by design”.

Of course, not all AI companies compete on a level playing field. Google has a unique advantage: everyone gives it permission to train its AI on their material. Why? Because otherwise, you will be kicked out of search altogether:

Many site owners say they can’t afford to block Google’s artificial intelligence from summarizing their content.

That’s because the Google tool that scans web content for AI answers is the same one that crawls web pages for search results, publishers say. Blocking Alphabet Inc.’s Google in the same way sites have blocked some of its AI competitors would also hamper a site’s ability to be discovered online.
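The mechanics of that bind live in robots.txt. A publisher can block the crawlers that exist purely for AI training (Google’s opt-out token is Google-Extended; OpenAI’s is GPTBot) while leaving Googlebot, which feeds both search and Google’s AI answers, untouched. A minimal sketch using Python’s standard-library robots.txt parser, with a hypothetical robots.txt of my own devising:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt illustrating the publisher's dilemma: the
# pure AI-training crawlers can be refused, but the crawler behind
# Google's AI answers is Googlebot itself, which also powers search.
robots_txt = """\
User-agent: Google-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

url = "https://example.com/article"
print(parser.can_fetch("Googlebot", url))        # True: search (and AI answers) still crawl
print(parser.can_fetch("Google-Extended", url))  # False: Gemini training blocked
print(parser.can_fetch("GPTBot", url))           # False: OpenAI training blocked
```

Blocking Googlebot itself would make all three lines print False, which is exactly the trade the publishers quoted above say they cannot afford.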

Ask me anything

What was I thinking? Ask me that, and any other tech-related question.

One more, self-indulgent note. After 11 years, I’m leaving the Guardian at the end of this month, and 2 September will be my last TechScape. I’ll be answering reader questions, big and small, as I say goodbye, so if there’s anything you’ve ever wanted an answer to, from tech recommendations to industry gossip, then hit reply and send me an email.

The broader TechScape

TikTok is boring you. Photograph: Jag Images/Getty Images/Image source
