The US government wants you (yes, you) to find flaws in generative AI

At the Defcon 2023 hacker conference in Las Vegas, major AI tech companies teamed up with algorithmic transparency and integrity groups to let thousands of attendees probe generative AI platforms for weaknesses in these critical systems. This “red team” exercise, which also had the support of the U.S. government, took a step toward opening up these increasingly influential but opaque systems to scrutiny. Now Humane Intelligence, a nonprofit focused on algorithmic evaluation and ethical AI, is taking this model a step further. On Wednesday, the group announced a call for participation with the U.S. National Institute of Standards and Technology, inviting any U.S. resident to take part in the qualifying round of a nationwide red-teaming effort to evaluate AI-powered office productivity software.

The qualifying round will be held online and is open to both developers and any member of the general public as part of NIST’s series of AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who make it through the qualifying round will take part in an in-person red-teaming event in late October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand the capacity for rigorous testing of the security, resilience, and ethics of generative AI technologies.

“The average person using one of these models doesn’t really have the ability to determine whether the model is fit for purpose,” says Theo Skeadas, CEO of the AI governance and online security group Tech Policy Consulting, which works with Humane Intelligence. “So we want to democratize the ability to make assessments and make sure that everyone using these models can assess for themselves whether or not the model meets their needs.”

The final event at CAMLIS will split participants into a red team that will attempt to attack the AI systems and a blue team that will work on defense. Participants will use the NIST AI Risk Management Framework, known as AI 600-1, as a rubric to measure whether the red team is able to produce outcomes that violate the systems’ expected behavior.

“NIST’s ARIA relies on structured user feedback to understand the real-world applications of AI models,” said Humane Intelligence founder Rumman Chowdhury, who is also a contractor for NIST’s Office of Emerging Technologies and a member of the U.S. Department of Homeland Security’s AI Safety and Security Board. “The ARIA team is comprised primarily of sociotechnical testing and evaluation experts, and is using that expertise as a way to evolve the field toward rigorous scientific evaluation of generative AI.”

Chowdhury and Skeadas say the NIST partnership is just one in a series of AI red team collaborations Humane Intelligence will announce in the coming weeks with U.S. government agencies, international governments, and NGOs. The effort aims to make it much more common for companies and organizations developing what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where people can be rewarded for finding problems and inequities in AI models.

“The community should be broader than just programmers,” says Skeadas. “Policymakers, journalists, civil society, and non-technical people should all be involved in the process of testing and evaluating these systems. And we need to make sure that underrepresented groups, such as people who speak minority languages or come from non-majority cultures and perspectives, are able to participate in this process.”