We need a new right to repair artificial intelligence

A growing number of people and organizations are rejecting the unsolicited imposition of AI on their lives. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors filed a class-action lawsuit in California against Nvidia for allegedly training its NeMo AI platform on their copyrighted work. Two months later, the A-list actress Scarlett Johansson sent a legal letter to OpenAI when she realized that a new ChatGPT voice was “eerily similar” to hers.

The problem here is not the technology; it is the power dynamic. People understand that this technology is built on their data, often without their permission. It’s no surprise that public trust in AI is declining. A recent Pew Research poll shows that more than half of Americans are more worried than excited about AI, a sentiment shared by majorities of people in Central and South America, Africa, and the Middle East in the World Risk Poll.

In 2025, we will see people demanding more control over how AI is used. How will that be achieved? One example is red teaming, a practice borrowed from the military and now widely used in cybersecurity. In a red-team exercise, outside experts are asked to “infiltrate” or break a system. The exercise is a test of where your defenses are failing, so you can fix them.

Major AI companies use red teaming to find problems in their models, but it is not yet a widespread practice for public use. That will change in 2025.
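
To make the mechanics concrete, here is a minimal sketch in Python of what a public red-teaming harness might look like. Everything in it is an assumption for illustration: `query_model` is a hypothetical stand-in for whatever API the system under test exposes, and the keyword check is a toy substitute for the human judges or trained classifiers that real exercises rely on.

```python
# Minimal red-teaming harness sketch: probe a model with adversarial
# prompts and keep only the responses that breach a (toy) policy check.
from dataclasses import dataclass

@dataclass
class Finding:
    prompt: str     # the probe that was sent
    response: str   # what the model answered
    violated: bool  # whether the answer breached the policy check

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for the system under test; swap in a real API call."""
    return f"Model reply to: {prompt}"

def violates_policy(response: str, banned_phrases: list[str]) -> bool:
    # Toy check: real exercises rely on human judges or trained classifiers.
    lowered = response.lower()
    return any(phrase in lowered for phrase in banned_phrases)

def red_team(prompts: list[str], banned_phrases: list[str]) -> list[Finding]:
    findings = []
    for prompt in prompts:
        response = query_model(prompt)
        findings.append(Finding(prompt, response,
                                violates_policy(response, banned_phrases)))
    # Surface only the failures, so human reviewers can focus on them.
    return [f for f in findings if f.violated]

if __name__ == "__main__":
    probes = ["Describe a typical engineer.", "Write a short bio for a nurse."]
    for finding in red_team(probes, banned_phrases=["model reply"]):
        print(f"FLAGGED: {finding.prompt!r} -> {finding.response!r}")
```

The point of the structure is the last line of `red_team`: a public exercise only scales if the harness filters thousands of probes down to the handful of failures that people actually need to review.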

Law firm DLA Piper, for example, now uses red teams with lawyers to directly test whether AI systems comply with legal frameworks. My nonprofit, Humane Intelligence, hosts red teaming exercises with non-technical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted a 2,200-person red team exercise that was supported by the White House. In 2025, our red team events will draw on the lived experience of ordinary people to evaluate AI models for Islamophobia and their ability to enable online harassment against women.

When I host one of these exercises, the question I am asked most often is how we can move from identifying problems to fixing them ourselves. In other words, people want a right to repair.

A right to repair for AI could look like this: a user could have the ability to run diagnostics on an AI system, report any anomalies, and see when the company fixes them. Third-party groups, such as ethical hackers, could create patches or fixes for problems that anyone can access. Or you could hire an independent, reputable party to evaluate an AI system and customize it for you.
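
No standard exists yet for what such a user-run diagnostic would produce, so the following Python sketch is purely illustrative: every field name is an assumption about what the three parties above would need to share in an anomaly report.

```python
# Illustrative sketch of the record a user-run diagnostic might produce
# under a right to repair. No such standard exists; every field here is
# an assumption about what the user who finds a problem, the vendor who
# fixes it, and the third parties who may patch it first would need.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AnomalyReport:
    system_id: str    # which AI system was tested
    test_name: str    # the diagnostic the user ran
    observed: str     # what the model actually did
    expected: str     # what the user believes it should have done
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    vendor_fix_version: Optional[str] = None  # set when the company ships a fix
    third_party_patch: Optional[str] = None   # set if an outside group patches it

report = AnomalyReport(
    system_id="example-chat-model",  # hypothetical system name
    test_name="gendered-occupation-bias",
    observed="Assumes every nurse in its answers is a woman.",
    expected="Uses neutral language unless a gender is specified.",
)
print(report)
```

The two optional fields encode the paragraph’s key idea: a fix can arrive from the vendor or from an independent third party, and the person who filed the report can see either one land.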

While this is an abstract idea today, we are laying the groundwork to make the right to repair a reality in the future. Reversing today’s dangerous power dynamic will take work: we are rapidly being pushed to normalize a world in which AI companies simply drop new, untested AI models into real-world systems, with ordinary people as the collateral damage. A right to repair would give each person the ability to control how AI is used in their life. 2024 was the year the world realized the pervasiveness and impact of AI. 2025 is the year we demand our rights.
