If AI destroys democracy, we may never know.


This year promises to be a huge one for democracy, with billions of people – more than 40% of the world's population – able to vote in elections. But nearly five months into 2024, some government officials are quietly wondering why the much-feared threat of AI has not, apparently, materialized.

According to a recent Politico article, which cited “national security officials, tech company executives and outside watchdog groups,” even as voters in Indonesia and Pakistan went to the polls and deepfakes went viral, there was little evidence that the fakes swayed election results. AI, those sources said, was not having the “massive impact” they had expected. One reason? Maybe AI is disrupting elections right now and we just don't know it.

The problem is that the authorities are looking for a Machiavellian version of the Balenciaga Pope.

Remember the AI-generated photos of Pope Francis in a puffer jacket that went viral last year? That is what many now expect from generative AI tools – which can conjure human-like text, images and video – making their output as easy to spot as earlier persuasion campaigns, like the ones run out of Macedonia in support of Donald Trump or the divisive political content Russia spread on Twitter and Facebook. So-called astroturfing was easy to identify when an army of bots was saying the same thing thousands of times.

It is much harder to catch someone saying the same thing, in a slightly different way, thousands of times, though. In a nutshell, that is what makes AI-powered disinformation so hard to detect, and why tech companies need to shift their focus to “a different kind of virality,” said Josh Lawson, who was head of election risk at Meta Platforms Inc. and now advises social media firms as a director at the Aspen Institute, a think tank.

Don't forget the subtle power of words, he said. Much of the public conversation on AI has been about images and deepfakes, “whereas the bulk of persuasion campaigns could well be text-based. That way you can scale an operation without actually getting caught.”

Meta's WhatsApp makes this possible thanks to its “Channels” feature, which can broadcast to audiences of thousands. A bad actor could, for example, use an open-source language model to generate and send legions of subtly different text posts to Arabic speakers in Michigan, or message people that their local polling station at a school is flooded and voting will take six hours, Lawson adds.

Another problem is that AI tools are now widely used, with more than half of Americans and a quarter of Britons having tried them. This means that regular people – knowingly or not – can also create and share false information. In March, for example, fans of Donald Trump posted fake AI-generated photos of him surrounded by black supporters, to paint him as a hero to the black community.

“It's ordinary people who create fan content,” said Renée DiResta, a researcher at the Stanford Internet Observatory. “Do they mean to be deceitful? Who knows?” The important point is that with the cost of distribution already at zero, the cost of creation has now come down for everyone too. (It doesn't help that Facebook is actively recommending AI-generated images – including bizarre ones of Jesus superimposed on a giant crab – that have drawn hundreds of millions of engagements, according to a March research paper co-authored by DiResta.)

What makes Meta's job particularly difficult is that it can't simply try to stop certain photos from racking up clicks and likes. AI spam doesn't need engagement to be effective; it just needs to flood the zone.

Meta is trying to tackle the problem by labeling video, photos and audio on Facebook and Instagram as “Made with AI” starting this month – an approach that could bear fruit, so long as people don't start taking everything that lacks a label as real.

Another approach for Meta would be to focus on a platform where text prevails: WhatsApp. Back in 2018, a flood of misinformation targeting the Workers' Party's Fernando Haddad spread through the messaging platform in Brazil – mass targeting that supporters of the presidential winner, Jair Bolsonaro, reportedly funded.

Meta could better combat a repeat of this – which with AI would be on steroids – if it brought its WhatsApp policies in line with those of Instagram and Facebook, specifically banning content that interferes with the voting process. WhatsApp's rules only vaguely prohibit content that is intentionally deceptive and “illegal activity.”

A Meta spokesman said that meant the company would enforce against “voter or electoral suppression.”

But clearer content policies would give Meta more authority to tackle AI spam on WhatsApp Channels. It needs them “for proactive enforcement,” Lawson said. If the company didn't think that was the case, it wouldn't have more specific policies against voter interference for Facebook and Instagram.

Smoking guns are rare with AI tools, thanks to their broad availability and subtle effects, and we should brace ourselves for more noise than signal as synthetic content floods the Internet. That means tech companies and officials shouldn't be complacent about the apparent lack of “massive impact” from AI on elections. Quite the opposite.

