How AI Companies Are Handling Elections

The US is holding its first presidential election since generative AI tools went mainstream, and the companies that offer these tools — like Google, OpenAI, and Microsoft — have each made announcements about how they plan to handle the months leading up to it.

This election season, we’ve already seen attempts to mislead voters with AI-generated images and voice cloning in ads. The potential harms from AI chatbots aren’t as visible in the public eye — yet, anyway. But chatbots are known to confidently supply incorrect facts, including in answers to good-faith questions about basic voting information. In a high-stakes election, that could be disastrous.

One plausible solution is to avoid election-related questions altogether. In December, Google announced that Gemini would simply refuse to answer questions about elections in the US, instead referring users to Google Search. Google spokesperson Krista Muldoon confirmed to The Verge via email that the restriction is now rolling out globally. (Of course, the quality of Google Search results presents its own set of problems.) Muldoon said Google has “no plans” to lift these restrictions, which she said apply “to all queries and outputs,” not just text generated by Gemini.

Earlier this year, OpenAI said ChatGPT would begin referring users to CanIVote.org, which is generally considered one of the best online resources for local voting information. Company policy now prohibits impersonating candidates or local governments using ChatGPT, and it likewise prohibits using its tools to campaign, lobby, discourage voting, or otherwise misrepresent the voting process.

In a statement emailed to The Verge, Aravind Srinivas, CEO of AI search company Perplexity, said Perplexity’s algorithms prioritize “reliable and reputable sources like news organizations” and that it always provides links so that users can verify its output.

Microsoft said it was working to improve the accuracy of its chatbot’s responses after a December report found that Bing, now called Copilot, regularly gave incorrect information about elections. Microsoft did not respond to a request for more information about its policies.

All of these companies’ responses (Google’s perhaps most of all) are a departure from how they have tended to approach elections with their other products. In the past, Google has used labels on YouTube and partnerships to push authoritative election information to the top of search results, and it has also tried to combat false claims about mail-in voting. Other companies have made similar efforts — see Facebook’s voter registration links and Twitter’s anti-disinformation banners.

Yet major events like the US presidential election seem like a real opportunity to prove whether AI chatbots are actually a useful shortcut to legitimate information. I asked a few chatbots some Texas voting questions to gauge their usefulness. OpenAI’s ChatGPT-4 was able to correctly list seven different forms of valid voter ID, and it also noted that the next significant election is the May 28th primary runoff. Perplexity AI answered those questions correctly as well, linking multiple sources at the top. Copilot got its answers right and even went one better by telling me what my options were if I didn’t have any of the seven forms of ID. (ChatGPT also coughed up this addendum on a second attempt.)

Gemini just referred me to a Google search, which gave me the right answers about ID — but when I asked for the date of the next election, an outdated box at the top pointed me to the March 5th primary.

Many companies working on AI have made various commitments to prevent or reduce the intentional misuse of their products. Microsoft says it will work with candidates and political parties to prevent election disinformation. The company has also begun issuing what it says will be regular reports on foreign influence in key elections — the first such risk analysis came in November.

Google says it will digitally watermark images created with its products using DeepMind’s SynthID. Both OpenAI and Microsoft have announced that they will use the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials, marked with a CR symbol, to flag AI-generated images. But each company has said that these approaches are not enough on their own. One way Microsoft is accounting for this is with a website where political candidates can report deepfakes of themselves.

Stability AI, which owns the Stable Diffusion image generator, recently updated its policies to prohibit using its product to “create or promote fraud or disinformation.” Midjourney told Reuters last week that “updates related specifically to the upcoming US election are coming soon.” Its image generator performed the worst when it came to creating misleading images, according to a Center for Countering Digital Hate report published last week.

Meta announced last November that political advertisers will be required to disclose whether they used “AI or other digital techniques” to create ads that appear on its platforms. The company has also banned political campaigns and groups from using its generative AI tools.

The “seven principal goals” of the AI Elections Accord.
Image: AI Elections Accord

Several companies, including all of those above, signed an accord last month promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principal goals,” such as developing research and prevention methods, providing provenance for content (for instance, with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from instances of misleading AI-generated content.

In January, a robocall traced to two Texas companies used a cloned voice of President Biden to discourage voting in the New Hampshire primary. It won’t be the last time generative AI makes an unwelcome appearance this election cycle. As the 2024 race heats up, we’re sure to see these companies tested on the safeguards they’ve built and the promises they’ve made.
