Brainless Answer – Columbia Journalism Review

Last spring, the press panicked. "Republicans criticize Biden's re-election bid in AI-generated ad," Axios reported. "Republicans counter Biden announcement with dystopian, AI-aided video," wrote The Washington Post. More recently, the AP offered an analysis: "President Joe Biden's campaign and Democratic candidates are in a fierce race with Republicans over who can best exploit the potential of artificial intelligence, a technology that could transform American elections — and that may threaten democracy itself." Elsewhere, more stories have piled up about the terrifying prospect of AI-generated deepfakes, including, notably, a Financial Times piece describing a video that circulated in Bangladesh, created with HeyGen, a tool that can produce news-style clips with AI-generated avatars for less than twenty-four dollars a month. "Policymakers around the world are worried about how AI-generated disinformation can be used to try to mislead voters and inflame division," the story noted.

In much of the coverage, there's an undercurrent of fear — sometimes expressed overtly — that AI-generated deepfakes and hoaxes are uniquely realistic and convincing. But that may just be technological panic: even if ubiquitous, cheap AI tools have made it easier than ever to generate disinformation, it's unclear whether AI is actually making much of a difference in politics. (In Bangladesh, Prime Minister Sheikh Hasina and her party were overwhelmingly re-elected.) For the press, focusing on AI's technological prowess may be a mistake.

Carl Miller — director of research at the Centre for the Analysis of Social Media at Demos, a UK-based political think tank — told me that, for the most part, there is no evidence that an explosion of AI fakes has changed people's minds. Many of us "have a pretty simplistic idea of how influence operations work," he said. People imagine that bad actors will "spread persuasive but false pictures of the world to make them change their minds." In reality, influence operations are designed to find people who already agree with their worldviews, flatter them, confirm them — and then try to mobilize them.

That's why, according to Renée DiResta of the Stanford Internet Observatory, the most common type of AI "chatbot," or fake account, on X is known as the "reply guy" — an account that shows up only to echo or affirm a post. AI chatbots can sometimes create an "illusion of majority," DiResta explained, giving the impression that a particular view is more common than it actually is. Through what she calls the "mechanics of influence," modern social media becomes a hybrid of the old broadcast model and personal gossip networks, combining the reach of the former with the reciprocity of the latter.

This means that how realistic a deepfake looks — AI's value proposition — is not what makes it convincing. (In fact, slickness can work against a deepfake's credibility; as synthetic-media and AI expert Henry Ajder pointed out in The Atlantic, it is "far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors.") What matters more is who the misinformation comes from, how it makes people feel, and whether it aligns with what they already believe. This kind of influence, Miller said, is not about truth or facts but about relationships. As he told me, "It's going to talk to them about the meaning of their lives, where they fit in the world. It's going to validate their grievances. It's all about identity and emotions and social connections."

Meta has in the past taken action against large networks of fake accounts, at least some of which originated in China and Russia. AI can make building such chatbot networks faster and easier. But Miller believes that a more powerful form of influence will be largely invisible — and therefore difficult to moderate — because it will occur in private groups and one-on-one interactions. "Maybe you'll strike up friendships in Facebook groups, but you can easily move that to live chat, whether it's on WhatsApp or Instagram or Signal," he said.

Some argue that AI may actually help in the fight against misinformation: Yann LeCun, a leading thinker in AI and Meta's chief AI scientist, made this argument in Wired recently. Five years ago, he said, about a quarter of all hate speech and misinformation removed by Facebook was identified by AI; last year it was closer to 95 percent. Miller, however, is not so confident that "any kind of automated model that we could deploy would reliably detect" AI-generated images or text. For now, platforms have created AI-transparency rules: Meta recently introduced "Made with AI" tags; YouTube requires creators to disclose when posting "realistic" content made with altered or synthetic media. Given the sheer amount of content that gets uploaded, though, these policies can be difficult to enforce — and all but impossible, it would seem, to enforce against influence operations.

Perhaps the biggest danger — more than AI-generated deepfakes, or persuasive AI "friends" — is what some call the "liar's dividend," whereby a politician (or any bad actor) can claim that something they know to be real is a deepfake, gambling on the public's general distrust of online content. The fear that "everyone is going to spend the next year having their worldview destroyed and ruined by this avalanche of fake images," Miller said, isn't necessarily warranted. More fundamental than the technical problem is the human problem: that we bend information to fit our beliefs. That we're more likely to share messages because they're funny or anger-inducing than because they're true. That we will lose trust in almost any source of information.

Mathew Ingram is CJR's chief digital writer. Before that, he was a senior writer at Fortune magazine. He has written about the relationship between media and technology since the early days of the commercial internet. His writing has been published in The Washington Post and the Financial Times, as well as by Reuters and Bloomberg.
