Parmy Olson: If AI destroys democracy, we may never know


This year promises to be huge for democracy, with billions of people, more than 40% of the world's population, able to vote in elections. But nearly five months into 2024, some government officials are quietly wondering why the much-hyped threat of AI has not, apparently, materialized.

According to a recent Politico article, which cited "national security officials, tech company executives and outside watchdog groups," even as voters in Indonesia and Pakistan went to the polls, observers saw little evidence that viral AI-generated fakes were swaying election results. AI, they said, was not having the "massive impact" they had expected.

This is a painfully short-sighted view. Why? Because AI may be disrupting elections right now without our knowing it.

The problem is that the authorities are looking for a Machiavellian version of the Balenciaga Pope. Remember the AI-generated photos of Pope Francis in a puffer jacket that went viral last year? This is what many now expect from generative AI tools, which can create human-like text, images and videos at scale.

So-called astroturfing was easy to identify when an army of bots said the same thing thousands of times. It is much harder to catch someone saying the same thing, in slightly different ways, thousands of times. In a nutshell, that is what makes AI-powered disinformation so hard to detect, and why tech companies need to shift their focus to "a different kind of virality," according to Josh Lawson, who once oversaw election risk at Meta Platforms Inc. and now advises social media firms as a director at the Aspen Institute, a think tank.

Don't forget the subtle power of words, he said. Much of the public conversation on AI has been about images and deepfakes, "whereas we could see that the bulk of persuasion campaigns are text-based. That's how you can scale an operation without actually getting caught."

Meta's WhatsApp makes this possible through its "Channels" feature, which can broadcast messages to thousands of people at once.

Another problem is that AI tools are now widely used, with more than half of Americans and a quarter of Britons having tried them. This means that regular people—intentionally or not—can also create and share false information.

Meta is trying to address this problem this month by labeling videos, images and audio on Facebook and Instagram as "Made with AI," an approach that could backfire if people start assuming everything without a label is real.

Another approach for Meta would be to focus on a platform where text is prevalent: WhatsApp. Back in 2018, a flood of misinformation spread through messaging platforms in Brazil, targeting Fernando Haddad of the Workers' Party. Supporters of the presidential winner, Jair Bolsonaro, reportedly funded the mass targeting.

Meta could better combat a repeat of that episode, which with AI would be on steroids, if it brought its WhatsApp policies in line with those of Instagram and Facebook by specifically banning content that interferes with the voting process.

Smoking guns are rare with AI tools, whose effects are widespread but subtle. We should brace ourselves for more noise than signal as artificial content floods the Internet. That means tech companies and officials shouldn't be complacent about the apparent lack of a "massive impact" from AI on elections. Quite the opposite.

