The hype about generative AI seems unavoidable: it must be addressed with education

Arvind Narayanan, a professor of computer science at Princeton University, is best known for blowing the whistle on the hype around artificial intelligence in his Substack project, AI Snake Oil, written with PhD candidate Sayash Kapoor. The two authors recently published a book based on their popular newsletter about AI's shortcomings.

But make no mistake: they are not against the use of new technologies. “It’s easy to misinterpret our message and think that all AI is harmful or dubious,” says Narayanan. During a conversation with WIRED, he makes it clear that his criticism is not directed at the software itself, but at the culprits who continue to spread misleading claims about artificial intelligence.

In AI Snake Oil, those responsible for perpetuating the current hype cycle fall into three main groups: companies selling AI, researchers studying AI, and journalists covering AI.

The hype superspreaders

Companies that claim to predict the future using algorithms are positioned as potentially the most fraudulent. “When predictive AI systems are deployed, the first people they harm are often minorities and people already living in poverty,” Narayanan and Kapoor write in the book. For example, an algorithm previously used by a local government in the Netherlands to predict who might commit welfare fraud wrongly targeted women and immigrants who did not speak Dutch.

The authors are also skeptical of companies that focus primarily on existential risks, such as artificial general intelligence (AGI) — the concept of a super-powerful algorithm that’s better than humans at performing tasks. They don’t scoff at the idea of AGI, though. “When I decided to become a computer scientist, the ability to contribute to AGI was an important part of my own identity and motivation,” Narayanan says. The disconnect stems from companies prioritizing long-term risk factors over the impact AI tools have on people right now — a common refrain I’ve heard from researchers.

According to the authors, much of the hype and misunderstanding can also be attributed to poor, unreproducible research. “We found that in a large number of fields, the question of data leakage leads to overly optimistic claims about how well AI works,” says Kapoor. Data leakage is essentially when you test an AI model on part of its own training data — similar to handing students the answers before the test.
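To make the idea concrete, here is a minimal sketch in Python (a hypothetical setup using scikit-learn and synthetic noise data, not an example drawn from the book): a model scored on the rows it was trained on looks impressive, while a proper held-out split reveals it learned nothing at all.

```python
# Illustrative sketch of data leakage: the labels here are random noise,
# so there is genuinely nothing for the model to learn.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # synthetic features: pure noise
y = rng.integers(0, 2, size=1000)  # random labels: no real signal

# Leaky evaluation: test on the same rows the model was trained on.
leaky_model = DecisionTreeClassifier().fit(X, y)
print("leaky accuracy:", accuracy_score(y, leaky_model.predict(X)))  # ~1.0

# Honest evaluation: hold out rows the model never saw during training.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
honest_model = DecisionTreeClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, honest_model.predict(X_te)))  # ~0.5, chance
```

The tree simply memorizes its training rows, so the leaky score is near perfect even though the held-out score is a coin flip — the same mechanism that inflates claims when test data overlaps training data.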

While academics are portrayed in AI Snake Oil as making “classic mistakes,” journalists, according to the Princeton researchers, are more malignly motivated and knowingly in the wrong: “Many articles are simply reworded press releases repackaged and whitewashed as news.” Journalists who avoid honest reporting in favor of maintaining their relationships with big tech companies and protecting their access to corporate executives are singled out as especially toxic.

I think the criticisms of access journalism are fair. In retrospect, I could have asked tougher or smarter questions during some interviews with stakeholders at major AI companies, but the authors might be oversimplifying the matter. The fact that big AI companies let me in doesn’t stop me from writing skeptical articles about their technology or working on research papers that I know will make them angry. (Yes, even when they strike commercial deals with WIRED’s parent company, as OpenAI has.)

And sensational news stories can be misleading about AI’s true capabilities. Narayanan and Kapoor point to New York Times columnist Kevin Roose’s 2023 transcript of an interaction with Microsoft’s chatbot, headlined “Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’”, as an example of journalists sowing public confusion about sentient algorithms. “Roose was one of the people who wrote these articles,” Kapoor says. “But I think when you see headline after headline talking about chatbots that want to come to life, it can have a huge impact on the public psyche.” Kapoor cites ELIZA, a chatbot from the 1960s whose users quickly anthropomorphized its rudimentary responses, as a prime example of the enduring impulse to project human qualities onto mere algorithms.
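ELIZA’s power to convince came from remarkably little machinery. As a rough illustration (a hypothetical toy, not Joseph Weizenbaum’s actual DOCTOR script), a handful of pattern-matching rules that reflect the user’s own words back as questions is enough to produce an eerily conversational effect:

```python
# Toy ELIZA-style responder: a few regex rules, no understanding at all.
import re

RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    # Return the first matching rule's reflection, else a stock prompt.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I feel like the chatbot understands me"))
# -> "What makes you feel like the chatbot understands me?"
```

A reader can see at a glance that nothing here “wants” anything — yet echoing someone’s words back as a question was enough for 1960s users to confide in the program, which is precisely the impulse the authors say today’s chatbot headlines exploit.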
