Google's generative AI failure will 'slowly erode our trust in Google'


It was a busy Memorial Day weekend for Google (GOOG, GOOGL) as the company raced to rein in a number of wild suggestions served up by the new AI Overview feature in its search platform. If you've been sunbathing on the beach or downing hot dogs and beer instead of scrolling through Instagram (META) and X, allow me to bring you up to speed.

AI Overviews are supposed to provide generative AI-based answers to search queries. Usually, they do. But over the past week, users have also been told that they can use non-toxic glue to keep the cheese from sliding off their pizza, that they can eat one rock a day, and that Barack Obama was the first Muslim president.

Google responded by taking down the responses and saying it was using the errors to improve its system. But the events, combined with Google's disastrous Gemini image generator launch that allowed the app to create historically inaccurate images, could seriously damage the search giant's reputation.

“Google is considered the largest source of information on the Internet,” explained Chinmay Hegde, associate professor of computer science and engineering at NYU's Tandon School of Engineering. “And if this product is watered down, it will slowly erode our trust in Google.”

Google's AI Overview problems aren't the first time the company has run into trouble since it began its generative AI drive. The company's Bard chatbot, which Google rebranded as Gemini in February, famously made a factual error in one of its responses in a promo video in February 2023, sending Google shares sliding.

Next came its Gemini image generator, which produced images of diverse groups of people in historically inaccurate settings, including depicting them as German soldiers in 1943.

Alphabet CEO Sundar Pichai speaks at the Google I/O event on May 14, 2024 in Mountain View, California. Bloopers — some funny, some disturbing — have been shared on social media since Google rolled out its search page overhaul that often features AI-generated summaries above search results. (AP Photo/Jeff Chiu, File) (Associated Press)

AI has a history of bias, and Google tried to overcome that by including a wider diversity of races when creating images of people. But the company overcorrected, and the software rejected some requests for photos of people of certain backgrounds. Google responded by temporarily taking the software offline and apologizing for the episode.

Meanwhile, Google said the AI Overview problems arose because users were asking unusual questions. In the rock-eating example, a Google spokesperson said that "it appears a website about geology was syndicating articles on the subject from other sources onto its site, and that happened to include an article that was originally published on The Onion." The AI Overview then linked to that source.

These are fine explanations, but the reality is that Google continues to release products with flaws that need to be addressed.

“At some point, you have to stand by the product that you roll out,” said Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University's Tepper School of Business.

"You can't just say … 'We're going to add AI to all of our well-established products, but it's also in permanent beta mode, and we can't be held responsible for any bugs or issues,' and even blame users for just relying on the product."

Google is the go-to website for finding facts online. Whenever I get into an argument with a friend about some frivolous topic, one of us inevitably yells, “Okay, Google it!” And chances are you've done the same. Maybe not because you wanted to prove you knew some obscure Simpsons fact better than your friend, but still. The thing is, Google has built a reputation for reliability, and its AI flubs are slowly eating away at it.

So why the slip? Hegde says the company is releasing products before they're ready in an effort to outpace rivals like Microsoft ( MSFT ) and OpenAI.

“The pace of research is so fast that the gap between research and product seems to be shrinking significantly, and that's causing problems at all levels,” he explained.

Google has been racing to catch up with Microsoft and OpenAI since the two teamed up in February 2023 to release a generative AI-powered version of the Bing search engine alongside a chatbot. OpenAI also managed to pre-empt Google's I/O developer conference earlier this month, announcing its powerful GPT-4o AI model a day before the show began.

But if beating the competition means releasing products that produce errors or harmful information, Google risks giving consumers the impression that its generative AI efforts can't be trusted and, ultimately, aren't worth using.

Subscribe to the Yahoo Finance Tech Newsletter. (Yahoo Finance)

Email Daniel Howley. Follow him on Twitter @DanielHowley.

Click here for the latest technology news that will impact the stock market.

Read the latest financial and business news from Yahoo Finance

