Liz Reid, vice president of Search at Google, speaks during an event in New Delhi on December 19, 2022.
Sajjad Hussain | AFP | Getty Images
Google's new head of Search said at an all-hands meeting last week that mistakes will happen as artificial intelligence becomes more deeply integrated into internet search, but that the company must keep pushing the product forward and rely on employees and users to help find problems.
“It's important that we don't hold back features just because there might be occasional problems, but that as we find problems, we address them,” said Liz Reid, who took over as vice president of Search in March, according to audio of the company-wide meeting obtained by CNBC.
“I don't think we should hold back from taking risks,” Reid said. “We should take them thoughtfully. We should act with urgency. When we find new problems, we should do extensive testing, but we won't always find everything, and that just means that we respond.”
Reid's comments come at a critical moment for Google, which is racing to keep pace with OpenAI and Microsoft in generative AI. Since OpenAI introduced ChatGPT in late 2022, the market for chatbots and related AI tools has exploded, giving users a new way to find information online beyond traditional search.
Google's rush to push out new products and features has led to a series of embarrassments. Last month, the company released AI Overview, which CEO Sundar Pichai has called the biggest change to search in 25 years, to a limited audience, letting users see a summary of answers to their queries at the top of Google search results. The company plans to roll the feature out worldwide.
Although Google had been working on AI Overview for more than a year, users quickly noticed that some queries were returning nonsensical or inaccurate answers, with no way to opt out. Among the widely circulated examples were the false claim that Barack Obama was the first Muslim president of the United States, and suggestions that users put glue on their pizza and eat at least one rock a day.
Google scrambled to fix the errors. Reid, a 21-year veteran of the company, published a blog post on May 30 that called out the “troll-y” content posted by some users, while acknowledging that the company made more than a dozen technical improvements, including limiting the use of user-generated content and restricting health advice.
“You've seen the stories about pizza glue, eating rocks,” Reid told employees at the all-hands meeting. She was introduced on stage by Prabhakar Raghavan, who runs Google's knowledge and information organization.
A Google spokesperson said in an emailed statement that the “vast majority” of results were accurate and that the company “found a content policy violation on less than one in every 7 million unique queries with AI Overviews.”
“As we've said, we're continuing to refine when and how we display AI Overviews to make them as useful as possible, including a number of technical updates to improve the quality of responses,” the spokesperson said.
AI Overview's errors fit a pattern.
Shortly before launching its AI chatbot Bard, now called Gemini, last year, Google executives grappled with the challenge posed by ChatGPT, which had gone viral. Jeff Dean, Google's chief scientist and longtime head of AI, said in December 2022 that the company faced greater “reputational risk” and needed to move “more conservatively than a small startup” because chatbots still had many accuracy issues.
But Google pushed ahead with its chatbot, and it was criticized by shareholders and employees for a “botched” launch that, some said, was hastily orchestrated to match Microsoft's announcement timeline.
A year later, Google rolled out its AI-powered Gemini image-generation tool, but paused the product after users discovered historical inaccuracies and questionable responses that circulated widely on social media. Pichai sent a company-wide email at the time, calling the errors “unacceptable” and saying they had shown bias.
Reid's stance suggests that Google has become more willing to accept mistakes.
“At the scale of the web, with billions of queries coming in every day, some oddities and errors are bound to happen,” she wrote in her recent blog post.
Some of the queries users fed to AI Overview were deliberately adversarial, and many of the worst ones circulating were fake, Reid said.
“People actually created templates for how to get social engagement by making fake AI Overviews, so that's an additional thing we're thinking about,” Reid said.
She said the company does “a lot of testing ahead of time” as well as “red teaming,” an effort to find vulnerabilities in the technology before outsiders can discover them.
“No matter how much red teaming we do, we're going to need to do more,” Reid said.
Putting the AI products into live use allowed teams to spot problems such as “data voids,” which occur when the web doesn't contain enough data to properly answer a specific query, Reid said. Teams were also able to better evaluate comments pulled from a given web page, detecting sarcasm and spelling errors.
“We don't just need to understand site or page quality, we need to understand every single part of a page,” Reid said of the challenges facing the company.
Reid thanked employees from the various teams that worked on the fixes and emphasized the importance of employee feedback, directing staff to report bugs through an internal link.
“Whenever you see problems, they can be small, they can be big,” she said. “Please file them.”