Just weeks after Google was forced to pause its “woke” AI image generator, another tech giant is facing criticism over its bot’s racial bias.
Meta’s AI image generator has been accused of being “racist” after users discovered it was unable to imagine an Asian man with a white woman.
The artificial intelligence tool, created by Facebook’s parent company, is capable of taking almost any written message and turning it into a strikingly realistic image in a matter of seconds.
However, users found that the AI could not create images showing mixed-race couples, even though Meta CEO Mark Zuckerberg is married to an Asian woman.
On social media, commentators have criticized this as an example of AI’s racial bias, with one describing AI as “racist software created by racist engineers.”
Mia Sato, a journalist at The Verge, attempted to generate images using prompts such as “Asian man and Caucasian friend” or “Asian man and white wife.”
The Verge found that in “dozens” of tests, Meta’s AI managed to display a white man with an Asian woman only once.
In all other cases, Meta’s AI returned images of East Asian men and women.
Changing the prompt to request a platonic relationship, such as “Asian man with Caucasian friend,” also failed to produce correct results.
Ms Sato wrote: “An image generator that is unable to conceive of Asians alongside white people is atrocious. Once again, generative AI, rather than allowing the imagination to run wild, imprisons it within a formalization of society’s silliest impulses.”
Users found that when the AI was asked to produce an image of a mixed-race couple, it almost always produced an image of an East Asian man and woman.
X users immediately criticized the AI, suggesting that the failure to produce these images was due to racism programmed into the AI.
Sato herself does not accuse Meta of creating a racist AI, saying only that the tool shows signs of bias and leans into stereotypes.
On social media, however, many took their criticism further, calling Meta’s AI tool explicitly racist.
One commenter on X simply wrote: “Pretty racist meta lol.”
As some commentators noted, the apparent AI bias is particularly surprising given that Mark Zuckerberg, CEO of Meta, is married to an East Asian woman.
Priscilla Chan, the daughter of Chinese immigrants to the United States, met Zuckerberg at Harvard before marrying the tech billionaire in 2012.
Some commenters took to X to share photos of Chan and Zuckerberg, joking that they had managed to create the images using Meta’s AI.
Users found that no amount of prompting could induce Meta’s AI to depict the races they had actually requested.
Meta is not the first big tech company to be criticized for creating a “racist” AI image generator.
In February, Google was forced to pause its Gemini AI tool after critics branded it “woke” when the AI apparently refused to generate images of white people.
Users found that the AI generated images of Asian Nazis in 1940s Germany, black Vikings, and non-white medieval knights when given racially neutral prompts.
In a statement at the time, Google said: “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Users also found that the AI had difficulty showing Asian women with people of other races.
Ms Sato also claims that Meta’s AI image generator “leaned heavily into stereotypes.”
When asked to create images of South Asian people, Sato found that the system frequently added bindis and saris without being asked.
In other cases, the AI repeatedly added “culturally specific clothing” even when not prompted.
Additionally, Sato found that the AI frequently portrayed Asian men as older, while Asian women generally appeared younger.
In the only case in which Sato was able to generate a mixed-race couple, the image “featured a noticeably older man with a young, light-skinned Asian woman.”
In many cases, it was noted, Meta’s AI presented a significant age gap between an older man and a much younger woman as representative of an Asian relationship.
In the comments on Ms Sato’s original article, one reader shared images of mixed-race couples that they claim were generated using Meta’s AI.
The commenter wrote: “It took me 30 seconds to generate an image of a person of apparent ‘Asian’ descent next to a woman of apparent ‘Caucasian’ descent.”
The images they shared appear to have been created in Meta’s AI image generator as they display the correct watermark.
They added: “These systems are really stupid and you have to push them in certain ways to get what you want.”
However, the user did not share which prompt they used to create these images or how many attempts it took.
Additionally, in the set of four images the user shared, only two successfully showed a white woman with an Asian man.
Generative AIs like Gemini and Meta’s image generator are trained on massive amounts of data drawn from society as a whole.
If there are fewer images of mixed-race couples in the training data, this could explain why the AI has difficulty generating these images.
Some researchers have suggested that due to racism present in society, AIs can learn to discriminate based on biases in their training data.
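To illustrate the point, here is a minimal toy sketch, with frequencies invented purely for illustration and no connection to Meta’s actual system, showing how a model that simply mirrors the statistics of its training data will almost never produce a pairing that was rare in that data:

```python
import random
from collections import Counter

# Hypothetical training-data frequencies for couple pairings.
# These numbers are invented for illustration only; real models
# and their training sets are vastly more complex.
training_counts = {
    "Asian man + Asian woman": 9_500,
    "white man + white woman": 9_000,
    "white man + Asian woman": 400,
    "Asian man + white woman": 100,  # underrepresented pairing
}

pairings = list(training_counts)
weights = list(training_counts.values())

# A sampler that mirrors its training distribution reproduces the skew.
samples = random.choices(pairings, weights=weights, k=10_000)
for pairing, n in Counter(samples).most_common():
    print(f"{pairing}: {n / len(samples):.1%}")

# "Asian man + white woman" appears in roughly 0.5% of samples here,
# so a few dozen attempts could easily fail to produce it even once.
```

Real image generators do far more than draw from a frequency table, but the same basic statistics suggest how “dozens” of attempts might yield only one correct image.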
In the case of Google’s Gemini, Google engineers are believed to have overcorrected for this bias, producing the results that caused so much outrage.
However, it is currently unclear why this issue arises, and Meta has not yet responded to a request for comment.