Key takeaways:

- A new federal agency to regulate AI sounds useful, but it could become unduly influenced by the tech industry it is meant to oversee. Instead, Congress can legislate accountability.
- Instead of licensing companies to release advanced AI technologies, the government could license auditors and push companies to establish institutional review boards.
- The government has not had great success curbing technology monopolies, but disclosure requirements and data privacy laws can help check corporate power.
Sam Altman, CEO of OpenAI, urged lawmakers to regulate AI in his testimony before the Senate on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed, creating an AI regulatory agency and requiring companies to obtain licenses, are interesting. But what the other experts on the same panel suggested is just as important: requiring transparency about training data and establishing clear frameworks for AI-related risks.
Another unspoken point was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of technology monopoly.
As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions highlight important issues but do not provide answers on their own. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any attempt to regulate the AI industry will have to take into account the economic power and political clout of the companies.
An agency to regulate AI?
Legislators and policymakers around the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This framing recognizes that tools for social scoring by governments and automated hiring tools pose different risks than, for example, the use of AI in spam filters.
The U.S. National Institute of Standards and Technology (NIST) also has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies, and think tanks.
Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidance on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies also have a role to play.
Instead of setting up a new agency that runs the risk of becoming captured by the technology industry it is supposed to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations changed reporting requirements for companies. Congress can also adopt comprehensive data privacy laws.
Regulating AI should involve collaboration among academia, industry, policy experts and international bodies. Experts have compared this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by non-governmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.
Licensing auditors, not companies
While OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with human-level intelligence that could pose a threat to humanity. That would be similar to companies being licensed to handle other potentially dangerous technologies, such as nuclear power. But licensing could come into play long before such a future scenario materializes.
Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals; it also calls for company-wide standards and practices.
AI fairness experts argue that issues of bias and fairness in AI cannot be addressed by technical methods alone but require broader risk mitigation practices, such as establishing institutional review boards for AI. Institutional review boards in the medical field, for example, help uphold individual rights.
Academic bodies and professional associations have also set standards for the responsible use of AI, whether that’s authorship standards for AI-generated text or standards for patient-mediated data exchange in medicine.
Strengthening existing consumer safety, privacy and protection statutes and introducing algorithmic accountability standards would help demystify complex AI systems. It is also important to recognize that greater data accountability and transparency can place new constraints on organizations.
Data privacy and AI ethics scholars have called for “technological due process” and frameworks to recognize the harms of predictive processes. The widespread use of AI-assisted decision-making in areas such as employment, insurance and healthcare calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.
Requiring such accountability provisions, however, demands robust debate among AI developers, policymakers and those affected by the widespread deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.
AI monopolies?
Also missing from Altman’s testimony is the size of the investment needed to train large-scale AI models, whether that’s GPT-4, one of the foundations of ChatGPT, or the text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.
Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without associated oversight risks reinforcing machine bias on a societal scale.
It is also important to recognize that the training data for tools like ChatGPT includes the intellectual labor of a large number of people, such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits of these tools, however, accrue only to the technology companies.
Proving that technology companies hold monopoly positions can be difficult, as the Justice Department’s antitrust case against Microsoft demonstrated. I believe the most viable regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI companies and users of AI alike, to push for broader adoption of AI risk assessment frameworks, and to require processes that protect individual data rights and privacy.