'Lack of trust': How deepfakes and AI could rock US elections


On January 21, Patricia Gingrich was about to sit down to dinner when her landline phone rang. The New Hampshire voter picked up and heard a voice telling her not to vote in the upcoming presidential primary.

“As soon as I heard it, I thought, Gosh, that sounds like Joe Biden,” Gingrich told Al Jazeera. “But the fact that he was saying save your vote, don't use it in the next election — I knew Joe Biden would never say that.”

The voice may have sounded like the President of the United States, but it wasn't: it was a deep fake generated by artificial intelligence (AI).

Experts warn that deepfakes — audio, video or images created using AI tools with the intent to mislead — pose a huge threat to American voters ahead of November's general election, not just by injecting false content into the race but also by eroding public trust.

Gingrich said she was not fooled by the Biden deepfake, but she worries it may have suppressed voter turnout. The message reached nearly 5,000 New Hampshire voters just days before the state's primary.

“It could be bad for people who aren't as informed about what's going on with the Democrats,” said Gingrich, chair of the Barrington Democratic Committee in New Hampshire.

“If they really thought they shouldn't vote and Joe Biden was telling them not to, they probably wouldn't have voted.”

US President Joe Biden's voice spoofed in robocall sent to New Hampshire primary voters [Leah Millis/Reuters]

Online groups are vulnerable

The Biden call wasn't the only deepfake so far this election cycle. Before ending his presidential bid, Florida Governor Ron DeSantis' campaign shared a video featuring AI-generated images of Donald Trump hugging immunologist Anthony Fauci, two figures who clashed publicly during the COVID-19 pandemic.

And in September, a different robocall went out to 300 voters participating in South Carolina's Republican primary. This time, recipients heard an AI-generated voice imitating Senator Lindsey Graham asking who they were voting for.

The practice of altering or falsifying content—especially for political gain—has existed since the dawn of American politics. Even the nation's first president, George Washington, had to contend with a series of forged letters that appeared to undermine the cause of American independence.

But AI tools are now advanced enough to replicate a person's likeness and voice quickly and cheaply, raising the risk of disinformation.

A study published earlier this year by researchers at George Washington University predicted that daily “AI attacks” would increase by mid-2024, which could pose a threat to November's general election.

Neil Johnson, lead author of the study, told Al Jazeera that the biggest threat comes not from the recent, obviously fake robocalls – which contained eyebrow-raising messages – but from more convincing deepfakes.

“These will be nuanced images, altered images, not completely fake information, because fake information attracts the attention of disinformation checkers,” Johnson said.

The study shows that online communities are interconnected in a way that allows bad actors to send large amounts of manipulated media directly into the mainstream.

Communities in swing states can be particularly vulnerable, as can parenting groups on platforms like Facebook.

“The role of parenting communities is going to be huge,” Johnson said, pointing to the rapid spread of vaccine misinformation during the COVID-19 pandemic as an example.

“I think we're going to have a sudden wave of [disinformation] – many things that are not fake, that are not false, but that stretch the truth.”

An AI-generated photo released by the Ron DeSantis campaign shows Donald Trump, left, hugging Anthony Fauci, right. [Leah Millis/Reuters]

Losing public trust

However, voters themselves are not the only targets of deepfakes. Larry Norden, senior director of the Elections and Government Program at the Brennan Center for Justice, is working with election officials to help them identify fraudulent content.

For example, Norden said, bad actors could use AI tools to direct poll workers to close a polling place early, either by mimicking the sound of their boss's voice or by sending a message that appears to come from a supervisor's account.

Norden is training poll workers to protect themselves by verifying the messages they receive.

Norden emphasized that bad actors can create misleading content without AI. “The thing about AI is that it makes it easy to scale,” he said.

Just last year, Norden illustrated the capabilities of AI by creating a deepfake video of himself for a presentation about the dangers of the technology.

“It didn't take long,” Norden said, explaining that all he had to do was feed his previous TV interviews into an app.

His avatar wasn't perfect — his face was a little blurry, his voice a little clipped — but Norden noted that the AI tools are improving rapidly. “Since we recorded it, the technology has gotten more sophisticated, and I think it's harder to tell.”

Technology alone is not the problem. As deepfakes become more common, the public will become more aware of them and more suspicious of the content they consume.

This can erode public trust, making voters more likely to reject truthful information. Political figures can also use this doubt for their own purposes.

Legal scholars have called this phenomenon the “liar's dividend”: Concerns about deepfakes can make it easier for the subjects of legitimate audio or video footage to claim the recordings are fake.

Norden points to the Access Hollywood tape, which emerged before the 2016 election, as an example. In the clip, then-candidate Trump is heard talking about making advances on women: “You can do anything. Grab 'em by the p****.”

The tape — which was very real — was seen as damaging to Trump's chances among female voters. But if similar audio were leaked today, Norden said a candidate could easily call it a fake. “It's easier for the public to reject this kind of thing than it was a few years ago.”

“One of the problems we have right now in America is a lack of trust, and that could make things worse,” Norden added.

Steve Kramer, center left, has been charged with 13 felony counts of voter suppression, as well as misdemeanor charges of impersonating a candidate, for his involvement in the New Hampshire robocall. [Steven Senne/AP Photo, pool]

What can be done about deepfakes?

Although deepfakes are a growing concern in US elections, relatively few federal laws restrict their use. The Federal Election Commission (FEC) has yet to ban deepfakes in elections, and bills in Congress have stalled.

Individual states are trying to fill the gap. According to a legislative tracker published by the consumer advocacy organization Public Citizen, 20 state laws have been enacted so far to regulate deepfakes in elections.

Several other bills have passed in Hawaii, Louisiana and New Hampshire and are awaiting their governors' signatures.

Norden said he was not surprised to see individual states act before Congress. “The states are supposed to be the laboratories of democracy, so it's proving true again: the states are acting first. We all know that it's really difficult to get anything passed in Congress,” he said.

Voters and political organizations are also taking action. After Gingrich received the fake Biden call in New Hampshire, she joined a lawsuit — led by the League of Women Voters — seeking accountability for the alleged deception.

The source of the call turned out to be Steve Kramer, a political consultant who claimed his intention was to draw attention to the need to regulate AI in politics. Kramer also admitted to being behind the South Carolina robocall imitating Senator Graham.

Kramer came forward after NBC News revealed he had hired a magician to use publicly available software to create the deepfake of Biden's voice.

According to the lawsuit, the deepfake took less than 20 minutes to create and cost just $1.

However, Kramer told CBS News that he received “$5m worth of exposure” for his efforts, which he hoped would allow AI regulations to “start to play themselves out or at least pay for themselves”.

“My intention was to make a difference,” he said.

Paul Carpenter, a New Orleans magician, said he was hired to create a deepfake of President Biden's voice. [Matthew Hinton/AP Photo]

Possibility of applying existing laws

But Kramer's case shows that existing laws can be used to curtail deepfakes.

For example, the Federal Communications Commission (FCC) ruled earlier this year that voice emulation software is covered by the Telephone Consumer Protection Act of 1991 — and that its use in robocalls is therefore illegal in most cases.

The commission ultimately recommended a $6m fine against Kramer for the illegal robocalls.

The New Hampshire Department of Justice also charged Kramer with voter suppression and candidate impersonation, which could result in up to seven years in prison. Kramer has pleaded not guilty. He did not respond to Al Jazeera's request for comment.

Norden said it's significant that none of the laws Kramer is accused of breaking are specific to deepfakes. “The criminal charges against him have nothing to do with AI,” he said. “Those laws exist independently of the technology used.”

However, it is not so easy to apply these laws to bad actors who are not identifiable or who are located outside the United States.

“We know from intelligence agencies that they are already seeing China and Russia experimenting with these tools. And they expect these tools to be used,” Norden said. “In that sense, you're not going to legislate your way out of this problem.”

Both Norden and Johnson believe the lack of regulation makes it more important for voters to inform themselves about deepfakes — and learn how to find accurate information.

As for Gingrich, she said she knows manipulative deepfakes will only become more common. She also feels that voters need to educate themselves about the risk.

Her message to voters? “I would ask people to make sure they know they can vote.”

