Google announced Thursday that it would refine and reshape its artificial intelligence-generated search result summaries, publishing a blog post explaining why the feature had returned strange and inaccurate responses, including telling people to eat rocks or add glue to pizza sauce. The company will reduce the scope of searches that return an AI-written summary.
Google has added several restrictions on the types of searches that generate AI Overview results, said the company’s head of search, Liz Reid, in addition to “limiting the inclusion of satire and humor content.” The company is also taking action against what it described as a small number of AI Overviews that violate its content policies, which it said occurred in fewer than 1 in every 7 million unique search queries on which the feature appeared.
The AI Overviews feature, which Google launched in the US this month, quickly produced viral examples of the tool misinterpreting information and appearing to draw on satirical sources like The Onion or joke posts on Reddit to generate responses. Google’s AI failures became a meme, with fake screenshots of absurd answers circulating widely on social media alongside the tool’s actual failures.
Google touted AI Overviews as one of the pillars of the company’s broader push to incorporate generative artificial intelligence into its core services, but the rollout left the company once again facing public embarrassment over a new AI product. Google drew backlash and ridicule earlier this year after its AI image generator mistakenly inserted people of color into ahistorical situations, including creating images of Black people as World War II-era German soldiers.
Google’s blog post gave a brief summary of what had gone wrong with AI Overviews while defending the feature, with Reid claiming that many of its genuine falsehoods resulted from gaps in information caused by strange or unusual searches. Reid also claimed there had been intentional attempts to manipulate the feature into producing inaccurate responses.
“There’s nothing like having millions of people using the feature with lots of fresh searches,” Reid said in the post. “We have also seen new meaningless searches, apparently intended to produce erroneous results.”
In fact, many of the viral posts came from strange searches like “how many rocks should I eat?”, which returned a result based on an Onion article titled “Geologists Recommend Eating At Least One Small Rock Per Day,” but others stemmed from more reasonable queries. One AI expert shared an image of an AI Overview claiming Barack Obama had been the first Muslim president of the United States, a common conspiracy theory on the right.
“By looking at examples from the past few weeks, we were able to determine patterns where we didn’t get it right and made more than a dozen technical improvements to our systems,” Reid said.
Although Google’s blog post frames the problems with AI Overviews as a series of edge cases, several artificial intelligence experts have commented that these problems point to broader issues with AI’s ability to assess factual accuracy and with automating access to information.
Google stated in its post that “user feedback shows” people are more satisfied with search results that include AI Overviews, but the broader implications of its AI tools and the changes to its search features remain unclear. Website owners worry that AI summaries will be disastrous for online media, sapping sites’ traffic and ad revenue, while some researchers worry that Google will consolidate even more control over what the public sees on the internet.