Concerns are growing over the impact of AI on the 2024 elections.


The rapid rise of artificial intelligence (AI) is raising concerns about how the technology could affect next year’s elections as the 2024 primary voting begins.

AI — advanced technology that can generate text, images, audio and even deepfake videos — could fuel misinformation in an already polarized political landscape and undermine voter confidence in the country's electoral system.

“2024 will be an AI election, just like 2016 and 2020 were social media elections,” said Ethan Bueno de Mesquita, interim dean of the University of Chicago Harris School of Public Policy. “We will all learn as a society about the ways in which this is changing our politics.”

Experts are sounding the alarm that AI chatbots could generate misleading information for voters who use them to look up ballots, calendars or polling places — and that the technology could be weaponized to spread disinformation against certain candidates or issues.

“I think it could be pretty dark,” said Lisa Bryant, chair of the political science department at California State University, Fresno, and an expert with the MIT Election Lab.

Concerns about AI don't just come from academics, polling shows: Americans seem increasingly worried about how the technology could confuse or complicate things during the contentious 2024 cycle.

A UChicago Harris/AP-NORC poll released in November found that a bipartisan majority of American adults are concerned about AI’s “increasing spread of misinformation” in the 2024 election.

A Morning Consult-Axios survey found an increase in recent months in the share of American adults who said they think AI will negatively affect trust in candidate ads, as well as trust in election results overall.

About 6 in 10 respondents said they think misinformation spread by AI will have an impact on who ultimately wins the 2024 presidential race.

“They're a very powerful tool for doing things like creating fake videos, fake pictures, etc., that look very convincing and are very hard to distinguish from the truth — and they could become, and already have become, a tool in political campaigns,” said Bueno de Mesquita, who worked on the UChicago poll.

“It's very likely that this is going to increase in the '24 election — that we're going to have fake content generated by AI, whether by political campaigns, political action committees or other actors, that will influence voters. The information environment makes it difficult to know what is true and what is false,” he said.

Over the summer, the DeSantis-affiliated super PAC Never Back Down reportedly used an AI-generated version of former President Trump’s voice in a television ad.

Just before the third Republican presidential debate, Trump's campaign released a video clip in which his fellow GOP candidates appeared to introduce themselves by Trump's favorite nicknames for them, with AI mimicking their voices.

And earlier this month, the Trump campaign published an altered version of a report that NBC News' Garrett Haake gave before the third GOP debate. The clip begins unchanged with Haake's report but gains a voiceover criticizing the former president's Republican rivals.

“The risk is there, and I think it's almost unthinkable that we won't have deepfake videos or the like in our politics going forward,” Bueno de Mesquita said.

The use of AI, particularly by political campaigns, has prompted tech companies and government officials to consider regulations on the technology.

Google announced earlier this year that it would require verified election advertisers to “prominently disclose” when their ads were digitally manipulated or altered.

Meta also plans to require disclosure when a political ad uses a photorealistic image or video, or realistic-sounding audio, that, among other things, depicts a real person doing or saying something they did not do.

President Biden issued an executive order on AI in October, which includes new standards for security and plans for the Commerce Department to develop guidelines on content authentication and watermarking.

“President Biden believes we have a responsibility to harness the power of AI for good while protecting people from its potentially profound dangers,” a senior administration official said at the time.

But lawmakers have been left largely scrambling to try to regulate the industry as it moves forward with new developments.

Shamaine Daniels, a Democratic congressional candidate in Pennsylvania, is using an AI-powered voice tool from the startup Civox for her campaign's phone banking.

“I share everyone's deep concerns about the potential nefarious uses of AI in politics and elsewhere. But we need to understand and embrace the opportunities that this technology presents,” Daniels said when she announced that her campaign would adopt the tool.

Experts say AI can also be used for good in election cycles — such as helping voters identify which candidates they agree with on the issues, and helping election officials clean up voter rolls by flagging duplicate registrations.

But they also warned that the technology could worsen problems that emerged during the 2016 and 2020 cycles.

Bryant said AI could help “micro-target” users with misinformation even more than social media already does. No one is immune, she said, pointing to how ads on platforms like Instagram can already influence behavior.

“It's really helped to take that misinformation and identify what kinds of messages really resonate and work with individuals based on past online behavior,” she said.

Bueno de Mesquita said he is less concerned about microtargeted voter manipulation campaigns, because the evidence shows social media targeting is not effective enough to sway elections. Resources should instead go toward educating the public about the “information environment” and pointing people to authoritative information, he said.

Nicole Schneiderman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said the organization does not expect AI to pose “new threats” to the 2024 election, but rather to accelerate trends that are already undermining electoral integrity and democracy.

She said there is a danger of overestimating AI's role in the wider landscape of disinformation affecting elections.

“Certainly, the technology can be used in creative and new ways, but the underlying threats — like disinformation campaigns or cyberattacks — are ones we've seen before,” Schneiderman said. “We should focus on the mitigation strategies that we know are responsive to the threats that are escalating, as opposed to spending too much time trying to assess every technology use case.”

One key solution to dealing with rapidly evolving technology may be simply to expose consumers to it.

“The best way to make yourself AI literate is to spend half an hour playing with a chatbot,” said Bueno de Mesquita.

UChicago Harris/AP-NORC respondents who reported being more familiar with AI tools were also more likely to believe that the technology could increase the spread of misinformation, suggesting that exposure to the tools can raise awareness of their dangers.

“I think the good news is that we have both old and new strategies for meeting this moment,” Schneiderman said.

As AI becomes more sophisticated, detection technology may have difficulty keeping up despite investments in those tools, she said. Instead, she said “prebunking” by election officials could be effective in informing the public before they potentially encounter AI-generated content.

Schneiderman said she hopes election officials also quickly adopt digital signatures, to tell reporters and the public which information is coming directly from an authentic source and which may be fake. Those signatures could also be attached to photos and videos that candidates post, as a safeguard against deepfakes, she said.

“Digital signatures are a proactive way of overcoming some of the challenges that synthetic content can pose to the voting information ecosystem,” she said.
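The mechanics of that idea can be sketched in a few lines of Python. The sketch below uses an HMAC from the standard library as a simplified stand-in for the public-key signatures a real system would use (where only the official holds the signing key and anyone can verify); the key, notice text and function names are all illustrative, not drawn from any actual election system.

```python
import hashlib
import hmac

# Hypothetical shared key; a real deployment would use an asymmetric
# key pair so that only the election office can sign, but anyone can verify.
SECRET_KEY = b"demo-election-office-key"

def sign(message: bytes) -> str:
    """Produce a tag that binds the message to the key holder."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message exactly."""
    return hmac.compare_digest(sign(message), tag)

# An official notice is signed once; any alteration invalidates the tag.
notice = b"Polls close at 8 p.m. on Election Day."
tag = sign(notice)
print(verify(notice, tag))                           # authentic copy passes
print(verify(b"Polls close at 6 p.m.", tag))         # tampered copy fails
```

The point the sketch illustrates is Schneiderman's: verification is cheap and mechanical, so a newsroom or voter can check authenticity without having to judge the content itself.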

She said election officials, political leaders and journalists can proactively get people the information they need about when and how to vote, limiting confusion and voter suppression. Narratives about election interference are nothing new, she added, which gives those fighting AI-driven disinformation a head start.

“One of the benefits we get from prebunking is developing effective counter-messaging that blunts recurring misinformation narratives, hopefully getting in front of voters long before the election and making sure the message is getting through to voters — that they're getting the authentic information they need,” Schneiderman said.

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
