Sam Altman, Chief Executive Officer of OpenAI, at the HOPE Global Forums Annual Meeting on Monday, December 11, 2023 in Atlanta, Georgia, US.
Dustin Chambers | Bloomberg | Getty Images
“I believe America is going to be fine, no matter what happens in this election. I believe AI is going to be fine, no matter what happens in this election, and we will have to work really hard to make it so,” Altman said during a Bloomberg House interview at the World Economic Forum in Davos this week.
Trump won a landslide victory in the Iowa Republican caucuses on Monday, setting a record for the race with a 30-point lead over his nearest rival.
“I think part of the problem is that we’re saying, ‘We’re sitting here, you know, it never occurred to us that what he’s saying is resonating with so many people, and now, suddenly, after his performance in Iowa, oh man.’ It’s a very déjà vu thing,” Altman said.
“I think there’s been a real failure to learn the lesson about what works for the citizens of the United States and what doesn’t.”
Part of what has propelled leaders like Trump to power is a working-class electorate that resents feeling left behind, with advances in tech widening that divide. When asked if there was a risk that AI would cause more harm, Altman replied, “Yeah, definitely.”
“It’s bigger than just a technological revolution … and so it’s going to be a social issue, a political issue. It already is in some ways.”
As voters in more than 50 countries, accounting for half the world’s population, head to the polls in 2024, OpenAI this week released new guidelines aimed at preventing misuse of its popular generative AI tools, including ChatGPT and DALL·E 3, which creates original images.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” the San Francisco-based company wrote in a blog post Monday.
The beefed-up guidelines include cryptographic watermarks on images generated by DALL·E 3, as well as an outright ban on the use of ChatGPT in political campaigning.
“A lot of these are things that we’ve been doing for a long time, and we have a release from the safety systems team that not only has moderation, but we’re actually able to leverage our own tools to scale our enforcement, which gives us, I think, a significant advantage,” said Anna Makanju, vice president of global affairs at OpenAI, on the same panel as Altman.
These measures are aimed at preventing a repeat of past episodes in which technology was used to disrupt key elections, such as the Cambridge Analytica scandal of 2018.
Reporting by The Guardian and elsewhere has revealed that the controversial political consultancy, which worked for the Trump campaign in the 2016 US presidential election, collected data on millions of people to influence the election.
Asked about OpenAI’s steps to ensure its technology isn’t used to manipulate elections, Altman said the company is “very focused” on the issue and has “a lot of anxiety” about getting it right.
“I think our role is very different from the role of a distribution platform” like a social media site or news publisher, he said. “We have to work with them, so it’s like we generate here and you distribute here. And there needs to be good communication between them.”
However, Altman added that he is less concerned about the dangers of artificial intelligence being used to manipulate elections than he was in previous election cycles.
“I don’t think it’s going to be the same as before. I think it’s always a mistake to try to fight the last war, but we do lose some of that, too,” he said.
“I think it would be terrible if I said, ‘Oh yeah, I’m not worried. I feel great.’ Like, we’re going to have to watch this relatively closely this year, [with] super tight monitoring [and] super tight feedback.”
While Altman may not be worried about the US election’s consequences for AI, the shape of any new government will be critical to how the technology is ultimately regulated.
Last year, President Joe Biden signed an executive order on AI that emphasized new standards for safety and security, protecting the privacy of American citizens, and advancing equity and civil rights.
One concern shared by many AI ethicists and regulators is the potential for AI to worsen social and economic disparities, especially as the technology embodies many of the same biases as the humans who built it.