I have been reviewing AI chatbots for 4 months now and they have rewired my mind.


As much as I hate to admit it, I've started letting ChatGPT do some of my thinking for me.

Months of reviewing AI chatbots such as ChatGPT, Perplexity, Claude and Gemini have rewired my brain to increasingly rely on generative technology. As a reporter, I'm tasked with reviewing chatbots and evaluating them against CNET's experience-based testing standards. So instead of parsing Google search results to find the right answer, I've started using AIs as answer engines, even if they sometimes get it wrong. The time saved by getting a quick response is very valuable.

AI chatbots are a truly remarkable tool, especially for reporters. There are times in my reporting when I need to find a specific piece of information, one buried deep in a research paper from Macquarie University in Australia, for example. An AI tool can find that exact fact in a vast ocean of information and give me what I'm looking for.

Yes, it's entirely possible for me to pull up the paper myself and comb through it for the insight I'm after. Or I can just ChatGPT it.

Yet, as remarkable as ChatGPT is, it can also be remarkably silly. Dealing with hallucinations, instances where an AI chatbot confidently gives an incorrect answer as if it were correct, is an annoyance at the very least. As a reporter, I need to be sure I'm dealing with real facts, and the information ChatGPT presents can sometimes be completely wrong, so I have to be extra careful. Whenever a piece of information seems questionable, it becomes a time-consuming task to dig into Google or consult another source to see whether it's true. Whatever time I thought I was saving goes out the window.

Even so, AI greatly compresses the time it takes to do reporting, at least when it works.

Knowing what to buy is not the same as knowing how to use it.

What I didn't expect was how effective AI could be when I'm trying to figure out what to buy. Sure, I can Google the best headphones to buy in 2024, but what if my query is more specific? Like, which Dolby Atmos speaker should I buy that comes in white, can be wall-mounted, but isn't overkill? Of course, I could swim in a sea of AVS Forum posts and Reddit threads, or even ask CNET's resident home audio expert, Ty Pendlebury. Or I can ChatGPT it.

At the same time, ChatGPT has to sift through its massive trove of data to find the right answer, and that's difficult when there are so many conflicting opinions floating around online. For example, when I asked ChatGPT how high I should mount Dolby Atmos speakers, the chatbot suggested placing them 12 to 24 inches above my front or rear speakers. When I told Pendlebury about ChatGPT's response, he shook his head and instead told me to mount them as close to the ceiling as possible.

I went back to ChatGPT and asked, "Shouldn't they be mounted closer to the ceiling?" That's when it started giving strange answers. It first agreed that, yes, Atmos speakers should be mounted closer to the ceiling. This is a general problem with AI: it tends to agree rather than disagree, which can reinforce your biases. But ChatGPT then said the reason Atmos speakers are mounted closer to the ceiling is so the sound can bounce off the ceiling and back down to the listener. However, these wall-mounted speakers are usually angled to fire toward the listener, not the ceiling. The logic just wasn't adding up.

Powerful tools, when pointed correctly.

AI chatbots are, in a sense, easier to use than Google because a simple question gets a clear and straightforward answer. A regular Google search requires you to jump around the internet, reading articles and forum posts to find the answer that's right for you. Letting technology do that synthesis for you is definitely awesome and a real time saver. But getting the most out of AI requires prompt engineering: specific question- and prompt-writing techniques that help a generative AI tool find exactly what you're looking for.

The need for prompt engineering is most notable in image generation. Bound by their terms of service, AI image generators are tuned not to output anything offensive. For example, when I asked DALL-E 3 to portray a woman on the beach, it was conservative, not wanting to create an image that was too revealing. Even a basic edit, such as asking it to change her black bikini to a red one, was rejected.

Image generators sometimes require you to talk around your query, almost tricking the AI into doing what you want it to do. Inevitably, I find myself reading posts from other AI "artists" in Reddit threads about what kinds of prompts can get around the filters. It doesn't help that the best AI "artists" try to hide their tips from others so they can't be imitated, which is anathema to real art, where sharing and developing techniques helps everyone get better.

South Park had it right.

In a season 26 episode of South Park titled "Deep Learning," there's a phrase that has embedded itself in my brain: "ChatGPT, man." I've adopted it as a motto in my daily life, much to the chagrin of my closest friends and family. Whenever they come to me with questions about tech or anything else, I just yell, "ChatGPT, man."

Is AI a precursor to the obsolescence of the human brain? Or is AI like a calculator, a tool that handles basic math with high precision so we can solve complex problems with greater speed and accuracy? The calculator clearly didn't end the field of math, though over-reliance on it may have weakened basic arithmetic skills over the past few generations. Still, given how useful calculators are, it's hard to imagine a world without them. And if a calculator for information and thought exists, it raises the question: Why ask me when you can ask a computer?

A good reason: As smart as AI chatbots are, they also make mistakes, so at the very least, be sure to double-check their work. I know I do.

Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI Policy.
