Google on Thursday said it was restricting its new AI-generated search results after the tool produced “odd, inaccurate or unhelpful” summaries that went viral on social media. It is the latest in a series of high-profile AI flubs for the search giant, whose AI image generator produced historically inaccurate results earlier this year.
Key Takeaways
- Google’s head of search, Liz Reid, said in a blog post that the company would scale back the use of its newly launched AI search tool, AI Overviews, and add guardrails to the technology after it was automatically rolled out to users across the U.S. two weeks ago.
- The tool, designed to improve search, uses generative AI to summarize search queries at the top of the results page. Though Reid said the feature was a valuable addition to Google Search, she acknowledged several high-profile failures that went viral on social media — the search engine told people to eat rocks, suggested putting glue on pizza and said Barack Obama was Muslim, to name a few — and highlighted areas for improvement.
- Reid said Google had updated its systems to limit the AI’s use of user-generated content like social media and forum posts when generating responses, as these are more prone to “offer misleading advice.”
- The search giant will also pause the AI from showing summaries on certain topics where the stakes are higher, notably queries related to health, and limit summaries for “nonsensical,” humorous and satirical queries that appear designed to elicit similarly unserious responses.
- The company has also “added triggering restrictions for queries where AI Overviews were not proving to be as helpful,” Reid said, adding that Google has “made more than a dozen technical improvements” overall.
- Despite the viral flubs, Reid defended the feature, saying AI Overviews has led to “higher satisfaction” among users and to people asking “longer, more complex questions that they know Google can now help with.”
What Happened With Google’s AI Overviews?
Google, by far the world’s most popular search engine, automatically rolled out AI Overviews to U.S. users earlier this month. The rollout pushes the typical links associated with a search result further down the page, below an AI-generated answer, and because the feature was enabled automatically and cannot be disabled, it sparked a degree of backlash among users.
Of greater concern to Google were the inaccurate, strange and sometimes downright ridiculous summaries that began spreading on social media, much as inaccurate images produced by its AI tool Gemini spread earlier this year. While many of the viral posts were genuine — such as telling people with kidney stones to drink litres of urine to help pass them, or claiming that eating rocks is good for your health — a good number were not.
Fact checkers have dismissed a variety of doctored or fake summary screenshots spreading around social media, such as responses claiming doctors recommend pregnant people smoke 2-3 cigarettes a day, suggesting depressed users jump off the Golden Gate Bridge or providing instructions for self-harm. Reid said Google encourages “anyone encountering these screenshots to do a search themselves to check.”
Many of the odd results stem from a “data void” or “information gap,” Reid explained — areas where little reliable information exists, which allows satirical content, such as the recommendation to eat rocks, to slip through.
Crucial Quote
“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” Reid said. “We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone.
“We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.”