Experts battled over what could happen if deepfakes interfered with the 2024 election.

It’s Election Day in Arizona, and elderly voters in Maricopa County are being told by phone that local polling places are closed due to threats from militia groups.

Meanwhile, in Miami, a wave of photos and videos on social media appears to show poll workers throwing out ballots.

The phone calls in Arizona and the videos in Florida are “deepfakes” created with artificial intelligence tools. But by the time local and federal officials figure out what they’re dealing with, misinformation has gone viral across the country.

The simulation was part of a recent exercise in New York that brought together dozens of former U.S. and state officials, civil society leaders and technology company executives to rehearse for the 2024 election.

The results were alarming.

“It was disturbing for the people in the room to see how quickly just a handful of these threats could spiral out of control and really dominate the election cycle,” said Miles Taylor, a former senior Department of Homeland Security official who helped organize the exercise for The Future US, a Washington-based nonprofit.

Dubbed “The Deepfake Dilemma,” the exercise illustrated how AI-powered tools threaten to turbocharge the spread of misinformation in an already polarized society and could sow chaos in the 2024 election, several participants told NBC News. Rather than examining a single attack by one group or a hostile regime, the exercise explored a scenario in which both domestic and foreign actors launched disinformation, exploited rumors and seized on political divisions.

Organizers and participants of the war game spoke exclusively to NBC News about how it played out.

They said it raised serious questions about whether federal and local officials, as well as the tech industry, are ready to combat disinformation, both foreign and domestic, designed to undermine public confidence in election results.

Current U.S. officials say privately that they share these concerns, and that some state and local election agencies will be hard-pressed to keep the election process on track.

The exercise underscored the uncertainty surrounding the roles of federal and state agencies and tech firms in what is expected to be one of the most divisive elections in American history. Does the federal government have the ability to detect AI deepfakes? Should the White House or a state election office publicly declare that a particular report is false?

Nick Penniman, CEO of Issue One, a bipartisan organization that promotes political reform and election integrity, said that unlike a natural disaster, in which government agencies operate through a central command, America’s decentralized election system offers no clear answer as to who is in charge, making a crisis like this uncharted territory.

“Now, over the last few years, we’ve had to defend our elections in America from both domestic and foreign forces. We don’t have the infrastructure or the history to do that at scale, because we’ve never faced such severe threats in the past,” said Penniman, who participated in the exercise.

“We know that eventually a hurricane is going to hit our polls,” Penniman said. But in the exercise, “because patterns of working together have not been established, few people understood how they should or should not coordinate with others.”

In a mock “White House Situation Room” around a long table, participants played assigned roles — including the directors of the FBI, CIA and Department of Homeland Security — and heard alarming reports from Arizona and Florida, along with a number of other unverified threats, including a break-in at a postal facility that processes mail-in ballots.

In conversations with tech companies, players who were “government officials” struggled to determine the facts, who was spreading the “deepfakes” and how government agencies should respond. (MSNBC anchor Alex Witt also took part in the exercise, playing the role of president of the National Association of Broadcasters.)

In the exercise, it was not initially clear whether the images and video of poll workers throwing out ballots in Miami were fake. The images went viral, driven in part by a Russian bot campaign.

Eventually, authorities were able to prove that the entire episode was staged and then enhanced by artificial intelligence to make it more convincing.

A woman walks past a “Vote Here” sign at City Hall in Miami Beach on Oct. 19, 2020. (Eva Marie Uzcategui / AFP via Getty Images, file)

In this and other cases, including the bogus calls to Arizona voters, players hesitated over who should publicly assure voters that their polling places were open and their ballots were secure. Federal officials feared that any public statement would be seen as an attempt to boost President Joe Biden’s re-election chances.

“There was also a lot of debate and uncertainty about whether the White House and the president should engage,” Taylor said.

“One of the big debates in the room was whose job it is to say whether something is real or fake,” he said. “Is it state-level election officials who say we’ve determined someone is fraudulent? Is it private companies? Is it the White House?”

“That’s something we think we’re going to see in this election cycle as well,” Taylor said.

And although the war game envisioned tech executives in the room with federal officials, in reality, communication between the federal government and private firms about countering foreign propaganda and disinformation has declined sharply in recent years.

The once-close cooperation between federal officials, tech companies and researchers that developed after the 2016 election has unraveled amid continued Republican attacks in Congress and court decisions that discourage federal agencies from consulting with companies about moderating online content.

The result is a potentially dangerous gap in security for the 2024 elections.

Former officials and experts said state governments lack the resources to quickly detect or counter AI deepfakes, and technology companies and some federal agencies are now wary of playing a major role.

“Everybody’s afraid of lawsuits and … accusations of stifling free speech,” said former Pennsylvania Secretary of State Kathy Boockvar, who participated in the exercise.

Taylor said similar sessions are being held in other states in addition to the New York war game, part of a broader effort to encourage more communication between tech executives and government officials.

But in the world outside of war games, social media platforms have cut the teams that moderate inaccurate election content, and there’s no sign those companies are willing to cooperate closely with the government.

Meanwhile, state and local election offices face a significant shortage of experienced staff. A wave of physical and cyber threats has led to a record exodus of election workers, leaving election agencies unprepared for November.

Concerned about understaffed and inexperienced state election agencies, a coalition of nonprofits and good-government groups is planning to organize a bipartisan, nationwide network of former officials, technologists and others to help local officials detect deepfakes in real time and respond with accurate information.

“We have to do our best — independent of the federal government and social media platforms — to try to fill that void,” said Penniman, whose organization is involved in election security efforts.

Boockvar, the former secretary of state, said she hopes nonprofits can act as a bridge between tech companies and the federal government, helping maintain channels of communication.

Some of the biggest AI tech firms say they are introducing security measures for their products and are in talks with government officials to help bolster election security ahead of the November vote.

“Ahead of the upcoming elections, OpenAI has developed policies to prevent abuse, launched new features to increase transparency around AI-generated content, and developed partnerships to connect people with authoritative sources of voting information,” a spokesperson said. “We continue to work with governments, industry partners, and civil society in our shared goal of protecting the integrity of elections around the world.”

However, the internet is full of smaller generative AI companies that may not adhere to the same principles, as well as open-source tools that allow people to build their own generative AI programs.

Voters cast their ballots at a polling place inside the Museum of Contemporary Art on Nov. 8, 2022, in Arlington, Va. (Nathan Howard / Getty Images, file)

An FBI spokeswoman declined to comment on the hypothetical scenario but said the bureau’s Foreign Influence Task Force provides federal leadership to “identify, investigate and disrupt” acts of “foreign malign influence within the United States that target our democratic institutions and values.”

The US Cybersecurity and Infrastructure Security Agency said it is working with state and local agencies to protect the nation’s elections.

“CISA is proud to stand shoulder-to-shoulder with state and local election officials as they defend our election process against cyber, physical and operational security threats, including the risk of foreign influence operations,” said senior adviser Cait Conley.

For many in the room, the scenarios underscored the need for a vigorous public education campaign to help voters recognize deepfakes and protect Americans from the coming onslaught of foreign and domestic misinformation.

The Future US and other groups are now in talks with Hollywood writers and producers about a series of public service videos to raise awareness of fake video and audio clips during the election campaign.

But if public education campaigns and other efforts fail to curb the spread of misinformation and potential violence, the country could face an unprecedented stalemate over who wins the election.

If enough doubt is raised about what happened during the election, there is a risk that the vote will end in a “deadlock” with no clear winner, said Danny Crichton of Lux Capital, a venture capital firm focused on emerging technologies that hosted the exercise.

If enough things “go wrong or people get caught up at the polls, you just end up with a deadlock,” Crichton said. “And to me, that’s the worst-case scenario. … I don’t think our system is strong enough to handle it.”
