Elections: AI supercharges disinformation risk in big year


LONDON (AP) — Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a warped imagination to create fake – but convincing – content aimed at fooling voters.

This marks a quantum leap from a few years ago, when creating fake photos, videos or audio clips required teams of people with time, technical expertise and money. Now, using free and low-cost generative artificial intelligence services from companies like Google and OpenAI, anyone can create high-quality “deepfakes” with just a simple text prompt.

A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning as more than 50 countries head to the polls this year.

“You don’t have to look far to see some people … being clearly confused as to whether something is real or not,” said Henry Ajder, a leading expert in generative AI based in Cambridge, England.

Ajder, who runs a consulting firm called Latent Space Advisory, said the question is no longer whether AI deepfakes can influence elections, but how influential they will be.

As the U.S. presidential race heats up, FBI Director Christopher Wray recently warned about the growing danger, saying generative AI “makes it easier for foreign adversaries to engage in malign influence.”

People are reflected in a hotel window on the Davos Promenade in Davos, Switzerland on January 15, 2024. (AP Photo/Marcus Schreiber, File)

With AI deepfakes, a candidate’s image can be smeared, or softened. Voters can be steered toward or away from candidates – or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that the rise of AI deepfakes could erode the public’s trust in what they see and hear.

Some recent examples of AI deepfakes include:

— A video of Moldova’s pro-Western president throwing his support behind a Russia-friendly political party.

— Audio clips of the leader of Slovakia’s Liberal Party discussing vote rigging and beer price hikes.

— A video of an opposition lawmaker in Bangladesh – a conservative Muslim-majority country – wearing a bikini.

The novelty and sophistication of the technology make it difficult to track who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the deluge, nor are they moving fast enough to solve the problem.

As the technology improves, “definitive answers about a lot of the fake content are going to be hard to come by,” Ajder said.

Eroding trust

Some AI deepfakes aim to sow doubt about candidates’ allegiances.

In the Eastern European country of Moldova, which borders Ukraine, pro-Western President Maia Sandu has been a frequent target. An AI deepfake that circulated shortly before local elections showed her supporting a Russian-friendly party and announcing plans to resign.

Moldovan President Maia Sandu, right, welcomes Ukrainian President Volodymyr Zelenskyy on June 1, 2023, in Bulboaca, Moldova. (AP Photo/Vadim Ghirda, File)

Moldovan officials believe the Russian government is behind the activity. With a presidential election this year, the goal of the deepfakes is “to erode trust in our electoral process, candidates and institutions — but also to erode trust between people,” said Olga Rosca, an adviser to Sandu. The Russian government declined to comment for this story.

China has also been accused of weaponizing generative AI for political purposes.

In Taiwan, a self-governed island claimed by China, an AI deepfake gained attention earlier this year by stirring concerns about U.S. interference in local politics.

A fake clip circulating on TikTok showed U.S. Rep. Rob Wittman, vice chairman of the U.S. House Armed Services Committee, promising stronger U.S. military support for Taiwan if the incumbent party’s candidates were elected in January.

Rep. Rob Wittman, R-Va., questions witnesses during a congressional hearing on Capitol Hill, Tuesday, Feb. 28, 2023, in Washington. (AP Photo/Alex Brandon, File)

Wittman accused the Chinese Communist Party of trying to meddle in Taiwan’s politics, saying it uses TikTok — a Chinese-owned company — to spread “propaganda.”

Chinese Foreign Ministry spokesman Wang Wenbin said his government does not comment on fake videos and opposes interference in other countries’ internal affairs. The Taiwan election, he asserted, “is a local matter for China.”

Blurred reality

Audio-only deepfakes are particularly difficult to verify because, unlike images and videos, they do not contain telltale signs of manipulated content.

Days before parliamentary elections in Slovakia, another country shadowed by Russian influence, audio clips of what appeared to be the voice of the head of the Liberal Party were widely shared on social media. The clips purportedly captured him talking about hiking beer prices and rigging the vote.

Ajder said it’s understandable that voters fall for the deception, since humans are “much more used to judging with our eyes than with our ears.”

In the U.S., robocalls impersonating President Joe Biden urged New Hampshire voters to abstain from voting in January’s primary election. The calls were later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes.

Paul Carpenter explains AI software during an interview on Friday, Feb. 23, 2024, in New Orleans. Carpenter says he was hired in January to use AI software to imitate President Joe Biden’s voice to convince New Hampshire Democratic voters not to vote in the state’s presidential primary. (AP Photo/Matthew Hinton, File)

In poor countries, where media literacy lags behind, even low-quality AI fakes can be effective.

That’s what happened last year in Bangladesh, where opposition lawmaker Rumeen Farhana – a vocal critic of the ruling party – was falsely depicted wearing a bikini. The viral video sparked outrage in the conservative, majority-Muslim nation.

“They trust whatever they see on Facebook,” Farhana said.

Rumeen Farhana, a Bangladesh Nationalist Party (BNP) politician, poses for a photo during an interview at her residence on Thursday, Feb. 15, 2024, in Dhaka, Bangladesh.

Experts are particularly worried about the upcoming elections in India, the world’s largest democracy, where social media platforms have been a breeding ground for disinformation.

A challenge to democracy

Some political campaigns are using generative AI to improve their candidate’s image.

In Indonesia, the team behind Prabowo Subianto’s presidential campaign deployed a simple mobile app to build a deeper connection with supporters across the vast island nation. The app enabled voters to upload photos and create AI-generated images of themselves with Subianto.

As AI deepfakes continue to proliferate, authorities around the world are scrambling to come up with guardrails.

Noudhy Valdryno, digital coordinator for Indonesian presidential candidate Prabowo Subianto’s campaign team, shows the interface of a web application that allows supporters to upload photos to have AI-generated images of themselves with Subianto, in Jakarta, Indonesia, Wednesday, Feb. 21, 2024. (AP Photo/Dita Alangkara)

The European Union already requires social media platforms to reduce the risk of spreading disinformation or “election manipulation.” It will mandate special labeling of AI deepfakes starting next year, too late for the EU’s parliamentary elections in June. Still, the rest of the world lags far behind.

The world’s biggest tech companies recently signed a voluntary pact to prevent AI tools from disrupting elections. For example, the company that owns Instagram and Facebook has said it will start labeling deepfakes that appear on its platforms.

But deepfakes are harder to rein in on apps like the Telegram chat service, which did not sign the voluntary agreement and uses encrypted chats that can be difficult to monitor.

Some experts worry that efforts to rein in AI deepfakes could have unintended consequences.

A banner advertising AI is placed on a building on the Davos Promenade next to the World Economic Forum on January 18, 2024 in Davos, Switzerland. (AP Photo/Marcus Schreiber, File)

Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington, said that well-meaning governments or companies can sometimes tread a “very thin” line between political commentary and “an illegitimate attempt to discredit a candidate.”

Major generative AI services have rules to limit political misinformation. But experts say it’s easy to bypass the platforms’ restrictions or use alternative services that don’t have the same protections.

Even without bad intentions, the growing use of AI is troubling. Many popular AI-powered chatbots are still spitting out false and misleading information that threatens to disenfranchise voters.

And software isn’t the only threat. Candidates may try to deceive voters by claiming that real events that portray them in an unfavorable light were generated by AI.

“A world in which everything is suspect — and therefore everyone gets to choose what they believe — is also a world that poses a real challenge for a flourishing democracy.”


Swenson reported from New York. Associated Press writers Julhas Alam in Dhaka, Bangladesh; Krutika Pathi in New Delhi; Huizhong Wu in Bangkok; Edna Tarigan in Jakarta, Indonesia; Dake Kang in Beijing; and Stephen McGrath in Bucharest, Romania, contributed to this report.


The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. Learn more about the AP’s democracy initiative here. The AP is solely responsible for all content.
