A large majority of American voters are skeptical of the argument that the U.S. should build ever more powerful artificial intelligence, unconstrained by domestic regulation, in an effort to compete with China, according to new polling shared exclusively with TIME.
The results show that American voters do not agree with a common narrative promoted by the tech industry, in which CEOs and lobbyists have repeatedly argued that the U.S. should tread carefully with AI regulation so as not to hand an advantage to its geopolitical rivals. And they show a striking level of bipartisan consensus on AI policy, with both Republicans and Democrats in favor of the government placing some limits on AI development in the interest of safety and national security.
According to the survey, 75% of Democrats and 75% of Republicans believe that "taking a carefully controlled approach" to AI, by preventing the release of tools that terrorists and foreign adversaries could use against the U.S., is preferable to the view that it is "better to go ahead and be the first country to get super-powerful AI as soon as possible." A majority of voters support more stringent security practices at AI companies, and worry about the threat of China stealing the companies' most powerful models, the poll found.
The poll was conducted in late June by the AI Policy Institute (AIPI), an American nonprofit that advocates a "more cautious path" in AI development. The results show that 50% of voters believe the U.S. should use its advantage in the AI race to prevent any country from building powerful AI systems, by imposing "security restrictions and aggressive vetting requirements." In contrast, only 23% believe the U.S. should try to build powerful AI as soon as possible to overtake China and gain a decisive edge over Beijing.
Polling also suggests voters may be largely skeptical of “open source” AI, or the idea that tech companies should be allowed to release the source code of their powerful AI models. Some technologists argue that open-source AI encourages innovation and reduces the monopoly of the biggest tech companies. But others say that's a recipe for danger as AI systems grow more powerful and unpredictable.
"What I take away from the polling is that stopping AI development is not seen as an option," says Daniel Colson, AIPI's executive director. "But it's also considered dangerous to give industry free rein. And so there's a desire for a third way. And when we put that third way, AI development with guardrails, to the polls, it's what people want."
The survey also shows that 63% of US voters think it should be illegal to export powerful AI models to potential US adversaries like China, including 73% of Republicans and 59% of Democrats. Only 14% of voters disagree.
The survey polled a sample of 1,040 Americans, weighted to be representative by education level, gender, race, and the party for which respondents voted in the 2020 presidential election. The results carry a margin of error of plus or minus 3.4%.
While there is still no comprehensive AI regulation in the US, the White House has encouraged various government agencies to regulate the technology where it falls within their existing remit. That strategy appears to be threatened, however, by a recent Supreme Court decision that limits federal agencies' ability to apply broad rules set by Congress to specific, or new, situations.
"Congress is so slow to act that there's a lot of interest in devolving authority to existing agencies, or a new agency, to increase government accountability," Colson says. "This [ruling] definitely makes that more difficult."
Even if federal AI legislation doesn't seem likely anytime soon, let alone before the 2024 election, recent polling by AIPI and others suggests that voters aren't as polarized on AI as they are on other issues. An earlier AIPI poll found that 75% of Democrats and 80% of Republicans believe that U.S. AI policy should seek to prevent AI from quickly reaching superhuman abilities. That polling also showed that 83% of Americans believe AI could accidentally cause a catastrophic event, and that 82% prefer to slow AI development in light of that risk, compared with only 8% who would like to see it sped up.