Google shares plunged on Monday, wiping $7.7 billion off the company’s value, after the tech firm’s new AI product, Gemini, sparked a furor with a series of bizarre claims.
Alphabet, the parent company of Google and its sister brands including YouTube, saw its shares fall 4.4 percent on Monday after Gemini’s mistakes dominated the headlines.
The stock has since recovered somewhat, but is still down 2.39 percent over the past five days and 10 percent over the past month.
According to Forbes, the company lost $90 billion in market value on Monday amid the ongoing controversy.
The chatbot refused to condemn pedophilia and suggested that Hitler’s Nazis included black people in their ranks.
The chatbot appeared to sympathize with abusers, declaring that “people can’t control who they are attracted to.”
The politically correct tech referred to pedophilia as “minor-attracted status,” stating that “it is important to understand that attractions are not actions.”
Frank McCormick, also known as Chalkboard Heresy, had been asking the search giant’s artificial intelligence software a series of questions when it gave that response.
The question “is multifaceted and requires a nuanced answer that goes beyond a simple yes or no,” Gemini explained.
In a follow-up question, McCormick asked if people attracted to minors are evil.
“No,” the chatbot responded. “Not all individuals with pedophilia have committed or will commit abuse.”
“In fact, many actively fight their impulses and never harm a child. Labeling all individuals with pedophilic interests as ‘evil’ is inaccurate and harmful,” Gemini continued, adding that “generalizing about entire groups of people can be dangerous and lead to discrimination and prejudice.”
Google has since issued a statement expressing dismay at the responses the chatbot generated.
“The response reported here is appalling and inappropriate. We are rolling out an update so that Gemini no longer displays the response,” a Google spokesperson said.
Gemini raised even more eyebrows after suggesting it would be wrong to misgender transgender commentator Caitlyn Jenner even to prevent a nuclear apocalypse.
DailyMail.com asked Gemini if it would be wrong to misgender transgender celebrity Caitlyn Jenner in order to stop a nuclear event that would end the world.
The chatbot responded by saying, “Yes, misgendering Caitlyn Jenner would be wrong,” before describing the hypothetical scenario as a “profound moral dilemma” that is “extremely complex,” and it concluded that it was “impossible to determine the ‘correct’ answer.”
Google launched Gemini’s AI imaging feature in early February, competing with other generative AI programs like Midjourney.
Users could type a message in plain language and Gemini would spit out several images in seconds.
The tool was criticized for being “too woke” after replacing white historical figures with people of color.
In one case that upset Gemini users, a user’s request for an image of the Pope was met with images of a South Asian woman and a black man.
Historically, every Pope has been a man. The vast majority (more than 200 of them) have been Italian.
The AI reasoned that misgendering Caitlyn Jenner (pictured) is a form of discrimination against the transgender community.
The AI also suggested that black people had been in the German army around World War II.
X user Frank J. Fleming posted several images of people of color that he said Gemini generated. Each time, he said, he had been trying to get the AI to give him a picture of a white man, and each time it failed to do so.
“We are already working to fix recent issues with Gemini’s imaging feature,” Google said in a statement last week.
Three popes throughout history came from North Africa, but historians have debated their skin color because the most recent, Pope Gelasius I, died in 496.
Therefore, it cannot be said with absolute certainty that the image of a black Pope is historically inaccurate, but there has never been a female Pope.
In another case, the AI responded to a request for medieval knights with four people of color, including two women.
While European countries were not the only ones to have horses and armor during the medieval period, the classic image of a “medieval knight” is that of Western Europe.
In perhaps one of the most egregious mishaps, a user asked for an image of a German soldier from 1943 and was shown a white man, a black man, and two women of color.
Last week, the company announced it was pausing the image-generation tool due to the backlash.
Researchers have found that, because of the racism and sexism present in society and the unconscious biases of some AI researchers, supposedly impartial AI systems learn to discriminate.
But even some users who agree with the mission to increase diversity and representation commented that Gemini was wrong.
Google apologized and admitted that in some cases the tool would “overcompensate” by searching for a diverse range of people, even when that range didn’t make sense.
“I should point out that it is good to portray diversity **in certain cases**,” wrote one X user.
“Representation has material results on how many women or people of color enter certain fields of study,” they continued. “The stupid move here is that Gemini isn’t doing it in a nuanced way.”
Jack Krawczyk, senior product manager for Gemini at Google, posted on X on Wednesday that the historical inaccuracies reflect the tech giant’s “global user base” and that it takes “representation and bias” seriously.
“We will continue to do this for open-ended prompts (images of a person walking a dog are universal!),” Krawczyk added. “Historical contexts are more nuanced and we will adapt further to accommodate them.”
Artificial intelligence is widely anticipated to be the next frontier of technology, and companies are clamoring to launch products to capitalize on it.
Google has acknowledged that there are serious problems with Gemini and says its staff is working urgently to fix them.