Google is turning into a libel machine.

Updated June 21, 2024 at 11:35 a.m. ET.

A few weeks ago, I watched Google Search make what might have been the most expensive error in its history. In response to a query about cheating in chess, Google's new AI Overview told me that the young American player Hans Niemann had “admitted to using an engine” (that is, a chess-playing AI) after defeating Magnus Carlsen in 2022: a confession of cheating against the world's top-ranked player. Suspicion about the American's play against Carlsen did indeed spark controversy that September, reaching beyond the world of professional chess to mainstream news coverage and the attention of Elon Musk.

Except Niemann admitted no such thing. Quite the contrary: He vigorously defended himself against the allegations, filing a $100 million defamation suit against Carlsen and several others who had accused him of cheating, or who had punished him over the unsubstantiated accusations. Chess.com, for example, banned Niemann from its website and tournaments. Although a judge threw out the case on procedural grounds, Niemann has since been cleared of wrongdoing, and Carlsen has agreed to play him again. But the episode clearly still rankles: Niemann recently spoke of an “unwavering commitment” to silencing his haters, saying, “I'm going to be their biggest nightmare for the rest of their lives.” Might he argue that Google and its AI are also damaging his reputation?

The error occurred when I searched for an article I had written about the controversy, which Google's AI cited. In it, I noted that Niemann has admitted to using a chess engine exactly twice, both times in online games when he was much younger. All Google had to do was summarize that accurately. But a defamatory mention is precisely the kind of mistake we should expect from AI models prone to “hallucination”: inventing sources, misattributing quotes, rewriting events. Google's AI Overviews have also falsely claimed that eating rocks can be healthy and that Barack Obama is Muslim. (Google repeated the error about Niemann's alleged cheating several more times, and stopped doing so only after I sent the company a request for comment.) A Google spokesperson told me that AI Overviews “sometimes present information that does not provide full context” and that the company works quickly to fix “instances of AI Overviews not meeting our policies.”

Over the past few months, tech companies with billions of users have begun to incorporate generative AI into more and more consumer products, and thus into the lives of billions of people. Chatbot answers are appearing in Google Search, AI is coming to Siri, AI answers are all over Meta's platforms, and businesses of all kinds are lining up to buy access to ChatGPT. In doing so, these corporations are breaking with the long-held position that they are platforms, not publishers. (The Atlantic has a corporate partnership with OpenAI. The editorial division of The Atlantic operates independently of the business division.) A traditional Google search or social-media feed presents a long list of content produced by third parties, content that courts have found the platforms are not legally responsible for. Generative AI flips the equation: Google's AI Overview crawls the web like a traditional search, but then uses a language model to compose the results into an original answer. I didn't say that Niemann cheated against Carlsen; Google did. In doing so, the search engine acted as both a speaker and a platform, or “splatform,” as the legal scholars Margot E. Kaminski and Meg Leta Jones have recently put it. It may be only a matter of time before an AI-generated lie about a Taylor Swift affair goes viral, or Google accuses a Wall Street analyst of insider trading. If Swift's, Niemann's, or anyone else's life were ruined by a chatbot, whom would they sue, and how? At least two such cases are already pending in the United States, and more are likely to follow.

Holding OpenAI, Google, Apple, or any other tech company legally and financially accountable for AI defamation (that is, for false statements made by their AI products that damage someone's reputation) could pose an existential threat to the technology. But so far nobody has managed to do so, and some of the established legal standards for suing a person or an organization for written defamation “lead to dead ends when you're talking about AI systems,” Kaminski, a professor who studies law and AI at the University of Colorado at Boulder, told me.

To win a defamation claim, one usually has to show that the defendant published false information that damaged one's reputation, and prove that the false statement was made with negligence or with “actual malice,” depending on the situation. In other words, you have to establish the mental state of the accused. But “even the most sophisticated chatbots lack mental states,” Nina Brown, a communications-law professor at Syracuse University, told me. “They can't act recklessly. Arguably, they can't even know that the information is false.”

Even as tech companies talk about their AI products as if they were actually intelligent, even humanlike or creative, the products are essentially data machines connected to the internet, and flawed ones at that. A corporation and its employees “are not actually directly involved in the production of the defamatory statement that creates the harm,” Brown said; presumably, nobody at Google is directing the AI to spread misinformation, much less lies about a particular person or entity. They just built an unreliable product and placed it within a search engine that was once, well, reliable.

One way forward might be to leave Google out of it altogether: if a human believes the false information and spreads it, that's their problem. Anyone who reads a false, AI-generated statement, does not verify it, and shares that information widely could be liable under existing defamation standards, Leslie Garfield Tenzer, a professor at the Elisabeth Haub School of Law at Pace University, told me. A journalist who took Google's AI output and republished it could be liable for defamation, and reasonably so if the misinformation would not otherwise have reached a wide audience. But such an approach doesn't get at the root of the problem. Indeed, defamation law “probably provides more protection to AI speech than human speech, because it's really, really hard to apply those questions of intent to an AI system that's being operated or manufactured by a corporation,” Kaminski said.

Another way to approach harmful AI outputs, Kaminski noted, is to start from the obvious observation that chatbots are not people but products made by corporations for general use, a situation for which several legal frameworks already exist. Just as a car company can be held liable for faulty brakes that cause highway accidents, and just as Tesla has been sued over alleged malfunctions of its Autopilot, tech companies could be held liable for flaws in their chatbots that end up harming users, Eugene Volokh, a First Amendment law professor at UCLA, told me. If a lawsuit shows a flaw in a chatbot's training data, algorithms, or safeguards that made defamatory statements more likely to occur, and shows that a safer alternative existed, Brown said, a company could be liable for negligently or recklessly releasing a defamation-spouting product. Whether a company has adequately warned users that its chatbot is unreliable could also be an issue.

Consider a current chatbot defamation case against Microsoft, which resembles the chess-cheating scenario: Jeffrey Battle, a veteran and aviation consultant, alleges that AI-powered answers on Bing said he had confessed to plotting a coup against the United States. Bing had confused him with Jeffrey Leon Battle, who did in fact plead guilty to such a crime; the conflation, the complaint alleges, has hurt the consultant's business. To win, Battle may have to prove that Microsoft was negligent or reckless about the AI falsehoods, which, Volokh noted, may be easier because Battle claims that Microsoft was informed of the error and did not take timely action to fix it. (Microsoft declined to comment on the case.)

The product-liability analogy is not the only way forward. Europe, Kaminski noted, has taken a risk-based route: if tech companies are going to release high-risk AI systems, they must properly assess and prevent those risks before doing so. Whether and how any of these approaches will apply to AI, and hold up in court, will have to be worked out through specific litigation. But there is precedent to draw on. The notion that “technology is moving too fast for the law,” Kaminski said, and that every technological advance requires the law to be rewritten, is mistaken; for AI defamation, “the framework should be substantially similar to existing law,” Volokh told me.

ChatGPT and Google Gemini may be new, but the industries rushing to implement them (pharmaceuticals, consulting, tech, energy) have long histories of being sued over consumer-protection violations, false claims, and violations of all sorts of other laws. The Federal Trade Commission, for example, has issued numerous warnings to tech companies about false advertising and privacy violations related to AI products. “Your AI copilots are not God. In fact, for the foreseeable future, AI will remain more of an adjective than a noun,” an agency lawyer recently wrote: AI describes artificial-intelligence tools or products. U.S. law, in turn, has regulated the internet for decades, and corporations for centuries.


The article originally stated that Google's AI Overview feature told users that chicken was safe to eat at 102 degrees Fahrenheit. The statement was based on a social media post and has since been removed.
