AI is coming to the search business. Or so we are told. As Google seems to get worse and tools like ChatGPT, Google Gemini, and Microsoft Copilot get better, it looks like we’re headed for a new way of finding and consuming information online. Companies like Perplexity and You.com are touting themselves as next-generation search products, and even Google and Bing are making big bets that AI is the future of search. Goodbye, 10 blue links; hello, direct answers to all my weird questions about the world.
But the thing you have to understand about search engines is that search engines are many things. For every person who uses Google to access hard and important scientific information, there’s another using it to search their email inbox, get to Walmart’s website, or remember who was president before Hoover. And then there’s my favorite fact of all: every year, a large number of people go to Google and type “google” into the search box. We mostly talk about Google as a research tool, but in practice, it’s called on to do anything and everything you can think of, billions of times a day.
The real question facing all of these would-be Google killers, then, isn’t how well they can find information. It’s whether they can do everything Google does. So I decided to put some of the best new AI products to a real-world test: I grabbed the list of the most popular Google queries and questions, according to SEO research firm Ahrefs, and fed them into various AI tools. In some instances, I found that these language-model-based bots are genuinely more useful than Google’s results page. But in most cases, I discovered exactly how difficult it’s going to be for anything — AI or otherwise — to displace Google at the center of the web.
People who work in search always say there are essentially three kinds of queries. The first and most popular is navigation, which is just people typing the name of a website to get to that website. Nearly all of the top queries on Google, from “YouTube” to “Wordle” to “Yahoo Mail,” are navigational queries. In many ways, this is a search engine’s main job: to get you to a website.
In reality, the main job of a search engine is to get you to a website.
For navigational queries, the AI search engines are universally worse than Google. When you do a navigational Google search, it’s rare that the first result isn’t the one you’re looking for — sure, it’s a little weird that Google shows you a page of results at all when it should really just take you straight to amazon.com or whatever, but it’s fast and it’s rarely wrong. The AI bots, on the other hand, like to think for a few seconds and then provide a bunch of semi-useful information about the company when all I want is a link. Some didn’t even link to amazon.com.
It’s not the extra information I mind so much as how long these AI tools take to get me what I need. Waiting 10 seconds for three paragraphs of generated text about Home Depot is not the answer; I just want the Home Depot link. Google wins that race every time.
The next most popular kind of search is the informational query: you want to know something specific, something with a single right answer. “NFL scores” is a hugely popular informational query; “what time is it” is another; so is “weather.” It doesn’t matter who tells you the score or the time or the temperature; there’s only one thing to know.
Here, the results were all over the map. For real-time things like sports scores, the AI bots can’t be trusted: You.com and Perplexity both repeatedly gave me outdated information, though Copilot usually got it right. Google not only gets these right but usually pops up a widget with other stats and information, which is better than all the rest. The same goes for anything that requires your specific location or context — Google probably has that information about you, but the AI bots mostly don’t.
When it comes to more evergreen information like “how many weeks in a year” or “when is Mother’s Day,” everything I tried got the answer right. In many cases, I actually preferred the AI answers, which add a bit of helpful context. But I’m not sure how much I can trust them. Google told me there are 52.1429 weeks in a year, but You.com explained that it’s actually 52 weeks and one day, plus an extra day in leap years. That’s more useful than just 52.1429! But then Perplexity told me that, in fact, a normal year is 52 weeks and a leap year is 52 weeks and one day, before directly contradicting itself two sentences later. Here’s the full answer; just try to make sense of it:
A normal year has about 52 weeks, while a leap year has 52 weeks and 1 day. More precisely, a regular year actually consists of 52.143 weeks, which means that there is an extra day in the normal year. On the other hand, a leap year, which occurs every four years, except for certain exceptions, has 52 weeks and 2 days. This difference in the number of weeks is due to the extra day in a normal year and the extra day in February during a leap year.
After doing some more research, I now believe the You.com answer is the right one. But the whole thing took too long, and having to fact-check its findings defeats the purpose of it summarizing things for me in the first place. Google keeps winning here on one thing and one thing only: speed.
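For what it’s worth, the underlying arithmetic is easy to verify yourself. This little Python sketch (mine, not any of the tools’) reconciles the competing answers:

```python
# A common year has 365 days, a leap year 366; a week has 7 days.
# divmod gives whole weeks plus leftover days in one step.
common_weeks, common_extra = divmod(365, 7)
leap_weeks, leap_extra = divmod(366, 7)

print(common_weeks, common_extra)  # 52 1 -> 52 weeks and 1 day (You.com's answer)
print(leap_weeks, leap_extra)      # 52 2 -> 52 weeks and 2 days in a leap year
print(round(365 / 7, 4))           # 52.1429 -> Google's decimal answer
```

Google and You.com are describing the same math; Google just reports the raw decimal while You.com spells out the remainder.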
There’s a subgenre of informational queries for which the exact opposite is true. I call them buried-information queries. The best example I can offer is the extremely popular query “how to take a screenshot on a Mac.” There are a million pages on the internet with the answer — it’s just Cmd-Shift-3 to grab the whole screen or Cmd-Shift-4 to capture a selection, you’re welcome — but that information is usually buried under a mountain of ads and SEO cruft. All the AI tools I tried, including Google’s own Search Generative Experience, just grab that information and give it to you directly. It’s awesome!
Does this raise complicated questions about the business model and structure of the web? Yes! But as a pure search experience, it’s vastly better. I got similarly good results asking about ingredient substitutions, coffee ratios, headphone waterproofing ratings, and all kinds of other information that’s easy to know and yet surprisingly hard to find.
That brings me to the third kind of Google search: the exploratory query. These are the questions that don’t have a single answer, the ones that are the beginning of a learning process. On the most-popular list, things like “how to tie a tie,” “why were chainsaws invented,” and “what is TikTok” qualify. If you’ve ever Googled the name of a musician you just heard about, or searched for things like “things to do in Helena Montana” or “NASA history,” you’re exploring. Judging by the rankings, these are not the main things people use Google for. But these are the moments where AI search engines can shine.
Like, wait: why were chainsaws invented? Copilot gave me a multipart answer about their medical origins before describing their technological evolution and adoption by lumberjacks. It also gave me eight pretty useful links for further reading. Perplexity gave me a much shorter answer, but it included some cool pictures of old chainsaws and a link to a YouTube explainer on the subject. Google’s results included many of the same links, but nothing that summed it all up for me. Even its generative search gave me only the basics.
My favorite thing about the AI engines is the citations. Perplexity, You.com, and others are slowly getting better at linking to their sources, often inline, which means that if I see a particular fact that piques my interest, I can go straight to the source from there. They don’t always offer enough sources, or put them in the right places, but it’s a welcome and helpful trend.
One experience I had during these tests was maybe the most eye-opening of all. The single most-searched question on Google is a simple one: “what to watch.” Google has a whole dedicated page design for it, with rows of posters showing “Top picks” like Dune: Part Two; “For you” picks including, for me, Deadpool and Halt and Catch Fire; and then options sorted by popular genres and topics. None of the AI search engines came close: Copilot listed five popular movies. Perplexity offered a seemingly random smattering of options, from Girls5eva to Manhunt to Shōgun. You.com gave me a bunch of outdated information and suggested I watch the “14 Best Netflix Original Movies” without telling me what any of them were.
AI is the right idea, but a chatbot is the wrong interface.
In this case, AI is the right idea — I don’t want a bunch of links, I want an answer to my question — but a chatbot is the wrong interface. So is a search results page, for that matter! Google, obviously aware that this is the most-asked question on its platform, has managed to design something that works much better than either.
In a way, that’s a perfect summary of the state of things. For at least some web searches, generative AI may well be a better tool than the search tech of decades past. But a modern search engine isn’t just a page of links; it’s more like a miniature operating system. It can answer questions directly, it has calculators and converters and flight pickers and all kinds of other tools built in, and it can get you where you’re going in just a click or two. The goal of most search queries, judging by these charts, is not to set off on a journey of information wonder and discovery. The goal is to get a link or an answer and then get out. Right now, the LLM-based systems are simply too slow to compete.
The big question, I think, is less about the tech and more about the product. Everyone, including Google, believes that AI can help search engines understand queries and process information better; that much is now a given in the industry. But can Google reinvent its results pages, its business model, and the way it presents and summarizes information faster than the AI companies can turn their chatbots into more complex, more multifaceted tools? Ten blue links isn’t the answer to a search query, but neither is an all-purpose text box. Search is everything, and everything is search. It’s going to take a lot more than a chatbot to kill Google.