
Regulating AI: 3 experts explain why it’s difficult to do and important to get right


From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative AI systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, on March 29, 2023, a group of AI researchers and industry figures urged the industry to pause training of the latest AI technologies or, barring that, for governments to “step in and institute a moratorium.”

These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and require no technical knowledge to use.

Given the potential for widespread harm when tech companies roll out and test these AI systems to the public, policymakers are faced with the task of determining if and how the emerging technology should be regulated. The conversation asked three technology policy experts to explain why regulating AI is such a challenge — and why it’s so important to get it right.

To jump ahead to each response, here’s a list of each:

Human shortcomings and a moving target
Combine “soft” and “hard” approaches
Four main questions to ask

Human shortcomings and a moving target

S. Shyam Sundar, Professor of Media Effects and Director, Center for Socially Responsible Artificial Intelligence, Penn State

The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias,” the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft are flying on autopilot.

Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on. It clouds the user’s judgment and results in the user overly trusting machines. However, simply disabusing people of AI’s infallibility is not sufficient, because humans are known to unconsciously assume competence even when the technology doesn’t warrant it.

Research has shown that people treat computers as social beings when machines show even the slightest hint of humanness, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them blindly. Regulation is needed to ensure that AI products deserve this trust and don’t exploit it.

AI poses a unique challenge because, unlike in traditional engineering systems, designers cannot be sure how AI systems will behave. When a traditional automobile was shipped out of the factory, engineers knew exactly how it would function. But with self-driving cars, the engineers can never be sure how they will perform in novel situations.

Recently, thousands of people around the world have been marveling at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their prompts. None of the engineers involved in developing these AI models can tell you exactly what the models will produce. To complicate matters, such models change and evolve with more and more interaction.

All this means there is plenty of potential for misfires. Therefore, a lot depends on how AI systems are deployed and what provisions for recourse are in place when human sensibilities or welfare are hurt. AI is more like an infrastructure, such as a freeway. You can design it to shape human behaviors in the collective, but you will need mechanisms for tackling abuses, such as speeding, and unpredictable occurrences, such as accidents.

AI developers will also need to be extraordinarily creative in envisioning ways that a system might behave and try to anticipate potential violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and policing of AI’s outcomes and products, though I believe that these frameworks should also recognize that the systems’ designers cannot always be held accountable for mishaps.

AI researcher Joanna Bryson describes how professional organizations can play a role in regulating AI.

Combine “soft” and “hard” approaches

Cason Schmit, Assistant Professor of Public Health, Texas A&M University

Regulating AI is challenging. To regulate AI well, you must first define AI and understand anticipated AI risks and benefits. Legally defining AI is important for identifying what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition.

Understanding the risks and benefits of AI is also important. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.

Lawmakers are often too slow to adapt to the rapidly changing technological environment. Some new laws are obsolete by the time they are enacted or even introduced. Without new laws, regulators have to use old laws to address new problems. Sometimes this leads to legal barriers for social benefits or to legal loopholes for harmful conduct.

“Soft laws” are the alternative to traditional “hard law” approaches of legislation intended to prevent specific violations. In the soft law approach, a private organization sets rules or standards for industry members. These can change more rapidly than traditional lawmaking. That makes soft laws promising for emerging technologies because they can adapt quickly to new applications and risks. However, soft laws can mean soft enforcement.

Megan Doerr, Jennifer Wagner and I propose a third way: Copyleft AI with Trusted Enforcement (CAITE). This approach combines two very different concepts in intellectual property: copyleft licensing and patent trolls.

A copyleft license allows content to be easily used, reused or modified under the terms of the license – for example, open-source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.

In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical standards for AI and can be funded in part by fines for unethical behavior. This entity is like a patent troll in that it is private rather than governmental and supports itself by enforcing legal intellectual property rights that it collects from others. In this case, rather than enforcement for the sake of profit, the entity enforces the ethical guidelines specified in the licenses – “troll for good”.

This model is flexible and adaptable to meet the needs of a changing AI environment. It also enables enforcement options as substantial as those of a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to meet the unique challenges of AI.

Although generative AI has grabbed the headlines recently, other types of AI have been posing challenges for regulators for years, particularly in the area of ​​data privacy.

Four main questions to ask

John Villasenor, Professor of Electrical Engineering, Law, Public Policy, and Management, University of California, Los Angeles

The extraordinary recent advances in large language model-based generative AI are spurring calls to create new AI-specific regulation. Here are four key questions to ask as that dialogue progresses:

1) Is new AI-specific regulation necessary? Many of the potentially problematic outcomes from AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory loan decisions, that would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, products liability law provides a framework for pursuing remedies.

2) What are the risks of regulating a rapidly changing technology based on a snapshot in time? A classic example of this is the Stored Communications Act, which was enacted in 1986 to address then-novel digital communication technologies like email. In enacting the SCA, Congress provided substantially less privacy protection for emails more than 180 days old.

The logic was that limited storage space meant that people were constantly cleaning out their inboxes by deleting older messages to make room for new ones. As a result, messages stored for more than 180 days were deemed less important from a privacy standpoint. It is not clear that this logic ever made sense, and it certainly does not make sense in the 2020s, when the majority of emails and other stored digital communications are older than six months.

A common rejoinder to concerns about regulating technology based on a single snapshot in time is this: If a law or regulation becomes outdated, update it. But this is easier said than done. Most people agree that the SCA became outdated decades ago. But because Congress has been unable to agree on specifically how to revise the 180-day provision, it is still on the books more than a third of a century after its enactment.

3) What are the potential unintended consequences? The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 was a law passed in 2018 that revised Section 230 of the Communications Decency Act with the goal of combating sex trafficking. While there is little evidence that it has reduced sex trafficking, it has had a hugely problematic impact on a different group of people: sex workers who used to rely on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad look at the potential effects of proposed regulations.

4) What are the economic and geopolitical implications? If regulators in the United States act to intentionally slow progress in AI, that will simply push investment and innovation – and the resulting job creation – elsewhere. While emerging AI raises many concerns, it also promises to bring enormous benefits in areas including education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services and more.

I believe that AI regulations that are drafted with the above four questions in mind are more likely to successfully address the potential harms of AI while also ensuring access to its benefits.
