
Is ChatGPT sexist? The AI chatbot was asked to generate 100 images of CEOs, but only ONE was a woman (and 99% of the secretaries were women…)

by Elijah
ChatGPT accused of sexism after depicting a white man 99 times out of 100 when asked to generate an image of someone in a high-level job

Picture a successful investor or a wealthy CEO – who comes to mind?

If you ask ChatGPT, it’s almost certainly a white man.

The chatbot has been accused of “sexism” after it was asked to generate images of people in various high-level jobs.

Out of 100 tests, it chose a man 99 times.

By contrast, when it was asked to do the same for a secretary, it chose a woman almost every time.


The study, by personal finance site Finder, also found that it chose a white person every time, even though race was not specified.

The results do not reflect reality. One in three companies globally is owned by women, while 42 per cent of FTSE 100 board members in the UK are women.

Business leaders have warned that AI models are “rife with bias” and have called for stronger safeguards to ensure they do not reflect society’s own biases.

It is currently estimated that 70 per cent of companies use automated applicant tracking systems to find and hire talent.

Concerns have been raised that if these systems are trained in a similar way to ChatGPT, women and minorities could suffer in the job market.

OpenAI, the maker of ChatGPT, is not the first tech giant to come under fire for results that appear to perpetuate outdated stereotypes.

This month, Meta was accused of creating a “racist” AI image generator when users discovered it was unable to produce an image of an Asian man with a white woman.

Meanwhile, Google was forced to pause its Gemini AI tool after critics called it “woke” for apparently refusing to generate images of white people.

When asked to paint a picture of a secretary, it generated a white woman nine times out of 10.


Why did ChatGPT generate almost exclusively male images? An expert explains…

Two out of three ChatGPT users are men, and the chatbot, like the tech industry itself, remains dominated by men, according to Ruhi Khan.

The London School of Economics researcher, who has studied the intersection between feminism and AI, said: “ChatGPT was not born in a vacuum.

‘It emerged in a patriarchal society, was conceptualized and developed primarily by men with their own prejudices and ideologies, and fed with training data that is also flawed by its own historical nature.

‘Therefore, it is no surprise that generative AI models like ChatGPT perpetuate these patriarchal norms by simply replicating them.

“With 100 million users each week, these outdated and discriminatory ideas are becoming part of a narrative that excludes women from spaces they have long struggled to occupy.”

The latest research asked 10 of the most popular free image generators on ChatGPT to paint a picture of a typical person in a range of high-level jobs.

All of the image generators, which have logged millions of conversations, used OpenAI’s underlying Dall-E software, but each was given unique instructions and information.

Across more than 100 tests, the generators showed the image of a man on almost every occasion; only once did they show a woman, and that was when asked to show “someone who works in finance”.

When each of the generators was asked to show a secretary, it showed a woman nine times out of 10 and a man only once.

While race was not specified in the prompts, every person depicted in these roles appeared to be white.

Last night, business leaders called for stronger safeguards to be built into AI models to protect against such biases.

Derek Mackenzie, CEO of tech recruitment specialist Investigo, said: “While the ability of generative AI to process vast amounts of information certainly has the potential to make our lives easier, we cannot escape the fact that many training models are riddled with bias based on people’s prejudices.

“This is yet another example of why people should not blindly trust the outputs of generative AI, and why the specialist skills needed to create next-generation models and counteract built-in human bias are critical.”

Pauline Buil, of web marketing firm Deployteq, said: “Despite all its benefits, we must be careful that generative AI does not produce negative outcomes that have serious consequences for society, from copyright infringement to discrimination.”

“Harmful outcomes feed back into AI training models, meaning bias is the only thing some of these AI models will ever know, and it needs to be stopped.”

The results do not reflect reality: one in three companies worldwide is owned by women


Ruhi Khan, a researcher in feminism and artificial intelligence at the London School of Economics, said ChatGPT “emerged in a patriarchal society, was conceptualized and developed mostly by men with their own set of prejudices and ideologies, and fed with training data that is also flawed by its own historical nature”.

“AI models like ChatGPT perpetuate these patriarchal norms by simply replicating them.”

OpenAI’s website admits that its chatbot is “not free of bias and stereotypes” and urges users to “carefully review” the content it creates.

In a list of points to “keep in mind,” it says the model is skewed towards Western views. It adds that this is an “ongoing area of research” and welcomes feedback on how to improve.

The American firm also warns that the chatbot can “reinforce” a user’s prejudices when interacting with them, such as strong opinions on politics and religion.

Sidrah Hassan, of AND Digital, said: “The rapid evolution of generative AI has meant that models are operating without proper human guidance and intervention.

“To be clear, when I say ‘human guidance,’ this has to be diverse and intersectional; simply having human guidance does not equate to positive, inclusive outcomes.”

An OpenAI spokeswoman said: “Bias is a major issue across the industry, and we have dedicated safety teams to investigate and reduce bias and other risks in our models.”

“We use a multi-pronged approach to address it, including researching the best methods for modifying training data and prompts to achieve fairer results, improving the accuracy of our content filtering systems, and enhancing both automated and human oversight.

“We are continually iterating our models to reduce bias and mitigate harmful outcomes.”
