
Procedural justice can address the trust/legitimacy issue of generative AI


The much-lauded advent of generative AI has reignited a familiar debate about trust and safety: Can tech executives be trusted to keep society’s best interests at heart?

Because the training data is human-made, AI is inherently prone to bias and therefore subject to our own imperfect, emotionally driven ways of seeing the world. We know all too well the risks, from reinforcing discrimination and racial inequality to promoting polarization.

OpenAI CEO Sam Altman has asked for our “patience and good faith” as the company works to “get it right.”

For decades, we’ve patiently placed our trust in tech executives at our peril: they built it, so we believed them when they said they could fix it. Trust in technology companies continues to fall, and according to the 2023 Edelman Trust Barometer, 65% of people globally are concerned that technology will make it impossible to know whether what they see or hear is real.

It’s time for Silicon Valley to embrace a different approach to earning our trust, one that has proven effective in the nation’s justice system.

A procedural justice approach to trust and legitimacy

Procedural justice is grounded in social psychology and based on research showing that people believe institutions and actors are more trustworthy and legitimate when they are listened to and when they experience neutral, unbiased and transparent decision-making.

The four major components of procedural justice are:

  • Neutrality: Decisions are unbiased and guided by transparent reasoning.
  • Respect: All are treated with respect and dignity.
  • Voice: Everyone has a chance to tell their side of the story.
  • Trustworthiness: Decision makers convey trustworthy motives to those affected by their decisions.

By using this framework, police departments have improved trust and cooperation in their communities, and some social media companies are starting to use these ideas to shape their governance and moderation approaches.

Here are a few ideas for how AI companies can adapt this framework to build trust and legitimacy.

Assemble the right team to answer the right questions

As UCLA professor Safiya Noble argues, the questions surrounding algorithmic bias cannot be solved by engineers alone, because they are systemic social issues that require humanistic perspectives from outside any one company to ensure societal conversation, consensus and, ultimately, regulation, both self-imposed and governmental.

In “System Error: Where Big Tech Went Wrong and How We Can Reboot,” three Stanford professors critique the shortcomings of computer science training and tech culture, whose obsession with optimization often pushes aside values core to a democratic society.

In a blog post, OpenAI says it values societal input: “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

However, the company’s recruiting page and CEO Sam Altman’s tweets show that it is hiring droves of machine learning engineers and computer scientists because “ChatGPT has an ambitious roadmap and is hampered by engineering.”

Are these computer scientists and engineers equipped to make decisions that, as OpenAI has said, “will require much more caution than society usually applies to new technologies”?

Technology companies should hire multidisciplinary teams that include social scientists who understand the human and societal impacts of technology. With a wider variety of perspectives on how to train AI applications and implement safety parameters, companies can articulate transparent reasoning for their decisions. This, in turn, can increase the public’s perception of the technology as neutral and trustworthy.

Include outside perspectives

Another element of procedural justice is giving people the opportunity to take part in the decision-making process. In a recent blog post about how OpenAI is addressing bias, the company said it is seeking “outside input on our technology,” pointing to a recent red-teaming exercise, a process of assessing risk through an adversarial approach.

While red teaming is an important process for evaluating risk, it must involve genuinely outside input. In OpenAI’s red-teaming exercise, 82 of the 103 participants were OpenAI employees. Of the remaining participants, most were computer scientists from predominantly Western universities. To get truly different points of view, companies need to look beyond their own employees, disciplines and geography.

Companies can also enable more direct feedback within AI products by giving users greater control over how the AI performs. They might also consider providing opportunities for public comment on new policies or product changes.

Provide transparency

Companies must ensure that all rules and related safety processes are transparent and convey trustworthy rationales for how decisions are made. For example, it is important to give the public information about how applications are trained, where the training data comes from, what role humans play in the training process, and what safety layers are in place to minimize misuse.

Enabling researchers to monitor and understand AI models is key to building trust.

Altman got it right in a recent ABC News interview when he said, “Society, I think, has a limited amount of time to figure out how to respond to that, how to regulate that, how to deal with it.”

Rather than the opacity and blind faith their technology predecessors demanded, a procedural justice approach allows companies building AI platforms to engage society in the process and to earn, rather than demand, trust and legitimacy.
