Politicians around the world are invoking AI to deflect accusations.


Artificial intelligence experts have long warned that AI-generated content could muddy the waters of perceived reality. Weeks into a crucial election year, AI confusion is at its peak.

Politicians around the world are shrugging off potentially damaging pieces of evidence — grainy video footage of hotel raids, audio recordings criticizing political opponents — by calling them AI-generated fakes.

Last month, former President Donald Trump dismissed an ad on Fox News that featured video of his well-documented public gaffes, including his struggle to pronounce the word “anonymous” in Montana and his 2018 visit to the California town of Paradise, which he referred to as “Pleasure,” claiming the footage was generated by AI.

“The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do,” Trump wrote on Truth Social. “FoxNews shouldn’t run these ads.”

The Lincoln Project, a political action committee formed by moderate Republicans to oppose Trump, quickly denied the claim. The ad featured events during Trump’s presidency that were widely covered at the time and seen in real life by many independent observers.

Still, AI gives politicians a form of plausible deniability, said Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and disinformation. “When you actually do catch a police officer or politician saying something awful, they have a rebuttal” in the age of AI, he said.

AI “destabilizes the concept of truth itself,” added Libby Lange, an analyst at the disinformation-tracking organization Graphika. “If everything can be fake, and if everyone is claiming everything is fake or manipulated in some way, there’s really no sense of ground truth. Politically motivated actors, especially, can take whatever interpretation they choose.”

Trump is not alone in taking advantage. Around the world, AI is becoming a common scapegoat for politicians trying to fend off damaging accusations.

Late last year, a grainy video surfaced of a Taiwanese ruling-party politician entering a hotel with a woman, suggesting he was having an affair. Commentators and other politicians quickly came to his defense, saying the footage was AI-generated — though it remains unclear whether it actually was.

In April, a 26-second audio recording was leaked in which a politician in the southern Indian state of Tamil Nadu appeared to accuse his own party of illegally raising $3.6 billion, Rest of World reported. The politician denied the recording’s authenticity, calling it “machine-generated.” Experts have said they are unsure whether the audio is real or fake.

Confusion over AI also extends beyond politics. Last week, social media users began circulating an audio clip of what they claimed was a Baltimore County school principal making racist remarks about Jewish people and Black students. The union representing the principal has said the audio is AI-generated.

Farid, who analyzed the audio, said several signs point to that conclusion, including the even cadence of the speech and indications of splicing. But without knowing where the clip came from or the context in which it was recorded, he said, it’s impossible to say for sure.

On social media, commenters largely seem to believe the audio is real, and the school district says it has launched an investigation. A request for comment sent to the principal through his union was not returned.

These claims carry weight because AI deepfakes are now more common and better at mimicking a person’s voice and appearance. Deepfakes regularly go viral on Facebook and other social platforms. Meanwhile, the tools and methods for identifying AI-generated media have not kept pace with rapid advances in AI’s ability to create such content.

Fake photos of Trump have gone viral many times. Earlier this month, actor Mark Ruffalo posted AI-generated photos of Trump with teenage girls, claiming the images showed the former president on a private plane owned by sex offender Jeffrey Epstein. Ruffalo later apologized.

Trump, who spent weeks railing against AI on Truth Social, posted about the incident, saying, “This is AI, and it’s very dangerous to our country!”

Growing concern over AI’s impact on politics and the global economy was a major theme at the conference of world leaders and CEOs in Davos, Switzerland, last week. In her remarks opening the conference, Swiss President Viola Amherd called AI-generated propaganda and lies “a real threat” to global stability, “especially today when the rapid development of artificial intelligence contributes to the increasing credibility of such fake news.”

Tech and social media companies say they are looking into building systems to automatically check and moderate AI-generated content, but have yet to do so. Meanwhile, only experts possess the technology and expertise to analyze a piece of media and determine whether it is genuine or fake.

Fewer still are capable of truth-squadding the kind of media that can now be created with easy-to-use AI tools available to almost anyone.

“You don’t have to be a computer scientist. You don’t have to be able to code,” Farid said. “Now there is no barrier to entry.”

Aviv Ovadya, an expert on AI’s impact on democracy affiliated with Harvard University’s Berkman Klein Center, said the general public is far more familiar with AI deepfakes now than it was five years ago. And as politicians see others evade criticism by claiming that evidence released against them is AI-generated, more will make that claim.

“There is a contagion effect,” he said, noting a similar rise in politicians calling the election rigged.

Ovadya said technology companies have the tools to manage the problem: They can watermark audio to create a digital fingerprint, or join a coalition aimed at preventing the spread of misleading information online by developing technical standards that establish the origins of media content. Most important, he said, they can tweak their algorithms so they don’t promote sensational but potentially false content.

So far, he said, tech companies have largely failed to take action to protect the public’s perception of reality.

“As long as the incentives continue to be engagement-driven sensationalism, and indeed controversy,” he said, “this is the kind of content — whether deepfake or not — that’s going to surface.”

Drew Harwell and Nitasha Tiku contributed to this report.
