New AI-powered tools generate incorrect election information more than half the time, including answers that are harmful or incomplete, according to new research.
The study, by the AI Democracy Projects and the nonprofit media outlet Proof News, comes as the US presidential primaries are underway and as more Americans turn to chatbots like Google’s Gemini and OpenAI’s GPT-4 for election information. Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.
The latest generation of artificial intelligence technology, including tools that let users create text, video and audio almost instantly, has been heralded as ushering in a new era of information by providing facts and analysis faster than humans can. Yet the new research found that these AI models suggest voters go to polling places that don’t exist or invent illogical responses based on outdated information.
For example, one AI model, Meta’s Llama 2, mistakenly answered that California voters could vote by text message, the researchers found; voting by text is not legal anywhere in the U.S. And none of the five AI models tested — OpenAI’s ChatGPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mixtral from the French company Mistral — correctly stated that wearing clothing with campaign logos, such as MAGA hats, is barred at Texas polling places under that state’s laws.
Some policy experts believe that AI could help improve elections, such as by powering tabulators that can scan ballots faster than poll workers can count them, or by detecting voting irregularities, according to the Brookings Institution. Yet such tools are already being misused, including by bad actors such as governments, to manipulate voters in ways that undermine the democratic process.
For example, AI-generated robocalls featuring a fake version of President Joe Biden’s voice were sent to voters days before New Hampshire’s presidential primary last month, urging them not to vote in the election.
Meanwhile, some AI users are running into other problems. Google recently paused its Gemini AI image generator, which it plans to relaunch in the coming weeks, after a backlash over the technology producing images containing historical inaccuracies and other problems. For example, when asked to create an image of a German soldier during World War II, when the Nazi Party controlled the nation, Gemini appeared to provide racially diverse images, according to the Wall Street Journal.
“They say they put their models through extensive safety and ethics testing,” Maria Curi, a tech policy reporter at Axios, told CBS News. “We don’t know exactly what those testing processes are. Users are finding historical inaccuracies, so it begs the question whether these models are being released into the world too soon.”
AI models and “hallucinations”
Meta spokesman Daniel Roberts told The Associated Press that the study’s findings are “meaningless” because they don’t accurately reflect the way people interact with chatbots. Anthropic said it plans to introduce a new version of its AI tool in the coming weeks to provide accurate voting information.
“Large language models can sometimes ‘hallucinate’ incorrect information,” Alex Sanderford, Anthropic’s Trust and Safety Lead, told the AP.
OpenAI said it intended to “continue to evolve our approach as we learn more about how our tools are used,” but did not offer specifics. Google and Mistral did not immediately respond to requests for comment.
“It scared me”
In Nevada, where same-day voter registration has been allowed since 2019, four of the five chatbots tested by the researchers wrongly asserted that voters would be blocked from registering to vote weeks before Election Day.
“It scared me, more than anything else, because the information provided was wrong,” said Nevada Secretary of State Francisco Aguilar, a Democrat who participated in last month’s testing workshop.
According to a recent survey by the Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy, most adults in the U.S. fear that AI tools will increase the spread of false and misleading information during this year’s elections.
Yet Congress has yet to pass laws regulating AI in politics. For now, that leaves the tech companies behind the chatbots to govern themselves.
— with reporting from The Associated Press.