'Security was not prioritized,' says former OpenAI security researcher

Leopold Aschenbrenner, a former security researcher at OpenAI, says security practices at the company were “grossly inadequate.” In a video interview with Dwarkesh Patel posted on Tuesday, Aschenbrenner spoke about internal conflicts over priorities, suggesting a focus on rapid development and deployment of AI models at the expense of safety.

He also said that he was fired after presenting his concerns in writing.

In an extensive, four-hour conversation, Aschenbrenner told Patel that he had written an internal memo last year detailing his concerns and circulated it among reputable experts outside the company. After a major security incident occurred weeks later, he said, he decided to share an updated memo with a few board members. He was dismissed from OpenAI shortly after.

“Context that might also be helpful is what kind of questions they asked me when they fired me… questions about my views on AI progress, on AGI, the appropriate level of security for AGI, whether the government should be involved in AGI, whether I and the Superalignment team were loyal to the company, and what I was up to during the OpenAI board events,” Aschenbrenner said.

AGI, or artificial general intelligence, refers to AI that meets or exceeds human intelligence in virtually any field, regardless of how it was trained.

Loyalty to the company, or to Sam Altman, emerged as a key issue after the CEO's brief ouster last November: more than 90% of employees signed a letter expressing solidarity with him and threatening to quit. The episode also popularized the slogan, “OpenAI is nothing without its people.”

“I did not sign the employee letter during the board events, despite pressure to do so,” Aschenbrenner recalled.

The Superalignment team, led by Ilya Sutskever and Jan Leike, was in charge of building long-term safety practices to ensure that AI remains aligned with human expectations. The team has since been disbanded, and a new safety committee was announced, led by CEO Sam Altman, who also sits on the OpenAI board to which it reports.

Aschenbrenner said OpenAI's actions contradicted its public statements about security.

“Another example is when I raised security issues, they would tell me that security is our top priority,” he said. “Invariably, when it came time to invest serious resources or make trade-offs to take basic measures, security was not prioritized.”

That's in line with statements from Leike, who said the team was “sailing against the wind” and that under Altman's leadership “safety culture and processes have taken a backseat to shiny products.”

Aschenbrenner also expressed concerns about AGI development, stressing the importance of a cautious approach, especially as many fear China is pushing hard to overtake the United States in AGI research.

China is “trying hard to infiltrate US AI labs, billions of dollars, thousands of people… [they’re] going to try to get ahead of us,” he said. “What will be at stake is not just cool products, but whether liberal democracy will survive.”

Just a few weeks ago, it was revealed that OpenAI required its employees to sign non-disclosure agreements (NDAs) that prevented them from discussing the company's security practices.

Aschenbrenner said he did not sign such an NDA, even though he was offered equity worth roughly $1 million to do so.

In response to these growing concerns, a group of about a dozen current and former OpenAI employees has since signed an open letter demanding the right to call out the company's wrongdoing without fear of retaliation.

The letter, endorsed by industry figures such as Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, stresses the need for AI companies to commit to transparency and accountability.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter reads. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

“Some of us reasonably fear retaliation, given the history of such cases throughout the industry,” it continued. “We're not the first to face or talk about these issues.”

After news of the exit agreements broke, Sam Altman claimed he was unaware of the provision and assured the public that his legal team was working to resolve the issue.

“There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communications,” he tweeted. “This is on me and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”

OpenAI says it has since released all employees from the controversial non-disparagement agreements and removed the clause from its departure paperwork.

OpenAI did not respond to Decrypt's request for comment.
