OpenAI promised to dedicate 20% of its computing power to combating the most dangerous kinds of AI, but it never delivered.


In July 2023, OpenAI unveiled a new team dedicated to ensuring that future AI systems that are more intelligent than all humans can be safely controlled. To signal how serious the company was about this goal, it publicly pledged to dedicate 20% of its then-available computing resources to the effort.

Now, less than a year later, that team, known as Superalignment, has been disbanded amid staff resignations and allegations that OpenAI is prioritizing product launches over AI safety. According to a half-dozen sources familiar with the work of OpenAI's Superalignment team, OpenAI never followed through on its promise to provide 20% of its computing power to the team.

Instead, according to sources, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, rejected by OpenAI's leadership, and the team's total compute budget never came close to the promised 20 percent threshold.

The revelations raise questions about how serious OpenAI was about honoring its public promise, and whether the company's other public promises should be trusted. OpenAI did not respond to requests for comment for this story.

The company is also currently facing backlash over its use of a voice for its AI speech-generation features that sounds similar to that of actress Scarlett Johansson. In that case, too, questions have been raised about the credibility of OpenAI's public statements that the similarity between the voice of the AI persona called "Sky" and Johansson's voice is purely coincidental. Johansson says OpenAI cofounder and CEO Sam Altman approached her last September, when the Sky voice first debuted, asking for permission to use her voice, and that she refused. She says Altman asked for permission to use her voice again last week, ahead of a demonstration of the company's latest GPT-4o model, which used the Sky voice. OpenAI has denied using Johansson's voice without her permission, saying it paid a professional actress to create Sky, whose name it says it cannot legally disclose. But OpenAI's claims have drawn skepticism, with some speculating on social media that the company actually cloned Johansson's voice, or perhaps blended another actress's voice with Johansson's, to create Sky.

OpenAI's Superalignment team was formed under the leadership of Ilya Sutskever, OpenAI's co-founder and former chief scientist, who announced his departure from the company last week. Jan Leike, a longtime OpenAI researcher, co-led the team; he announced his resignation on Friday, two days after Sutskever's departure. The company then told the team's remaining employees, numbering about 25, that the team was being dissolved and that they would be reassigned within the company.

It was a swift fall for a team whose work, less than a year earlier, OpenAI had hailed as vital to the company and to the future of civilization. Superintelligence is the idea of a future, hypothetical AI system that would be smarter than all humans combined. It is a technology that would go beyond artificial general intelligence, or AGI, the company's stated goal of creating a single AI system as smart as any human.

Superintelligence, the company said when it announced the team, could pose a threat to humanity, perhaps by seeking to kill or enslave people. "We currently don't have a solution for steering and controlling a potentially superintelligent AI, and preventing it from going rogue," OpenAI said in its announcement. The Superalignment team was supposed to research those solutions.

It was such an important task that the company said in its announcement that it was committing "20% of the compute we've secured to date over the next four years" to the effort.

But a half-dozen sources familiar with the Superalignment team's work said the group was never given that compute. Instead, it received far less, allocated through the company's regular compute budgeting process, which is reviewed quarterly.

A source familiar with the Superalignment team's work said there was never a clear metric for how the 20% figure should be calculated, leaving it open to wide interpretation. For example, the source said the team was never told whether the promise meant "20% per year for four years," or "5% a year for four years," or some variable amount that could end up being "1% or 2% for the first three years, and then the bulk of the commitment in the fourth year." In any case, all the sources Fortune spoke to for this story confirmed that the Superalignment team was never given anything close to 20% of the compute OpenAI had secured as of July 2023.

OpenAI researchers can also make requests for what's called "flex" compute, access to extra GPU capacity beyond what has been budgeted, to tackle new projects between quarterly budget meetings. But these sources said flex requests from the Superalignment team were routinely turned down by higher-ups.

Bob McGrew, OpenAI's vice president of research, was the executive who informed the team that the requests were being rejected, but others at the company, including chief technology officer Mira Murati, were involved in the decision, the sources said. Neither McGrew nor Murati responded to requests for comment for this story.

While the team did publish some research, including a paper in December 2023 detailing its experiments in successfully getting a less powerful AI model to control a more powerful one, the lack of compute meant the team's more ambitious ideas were shelved, the sources said.

After resigning, Leike published a series of posts on X (formerly Twitter) on Friday criticizing his former employer, saying that "safety culture and processes have taken a backseat to shiny products." He also wrote, "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."

Five sources familiar with the Superalignment team's work supported Leike's account, saying the compute access problems were exacerbated by a pre-Thanksgiving showdown between Altman and the board of the OpenAI nonprofit foundation.

Sutskever, who was on the board, had voted to fire Altman and was the person the board chose to break the news to him. After OpenAI staff revolted in response to the decision, Sutskever posted on X that he "deeply regrets" his participation in Altman's firing. Ultimately, Altman was rehired, and Sutskever and several other board members involved in his ouster stepped down from the board. Sutskever never returned to work at OpenAI after Altman's rehiring, but did not officially leave the company until last week.

One source contradicted the other sources Fortune spoke to about the compute problems the Superalignment team faced, saying those problems predated Sutskever's participation in the failed coup and had dogged the group from the start.

Although there have been some reports that Sutskever continued to co-lead the Superalignment team remotely, sources familiar with the team's work say this was not the case, and that Sutskever had no access to the team's work and played no role in directing the team after Thanksgiving.

With Sutskever gone, the Superalignment team lost the only person with enough political capital within the organization to successfully argue for its compute allocation, sources said.

In addition to Leike and Sutskever, OpenAI has lost at least six other AI safety researchers from various teams in recent months. One researcher, Daniel Kokotajlo, told the news site Vox that he "gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit."

In response to Leike's comments, Altman and cofounder Greg Brockman, OpenAI's president, posted on X that they were "grateful to [Leike] for everything he has done for OpenAI." "We need to keep elevating our safety work to match the stakes of each new model," the pair wrote.

They then went on to outline their vision for the company's approach to AI safety going forward, which would place much more emphasis on testing models currently in development rather than trying to develop theoretical approaches to making future, more powerful models safe. "We need a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities," Brockman and Altman wrote, adding that "empirical understanding can help inform the way forward."

The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, or because they feared losing vested equity in the company, or both. Employees who have left OpenAI have been required to sign severance agreements that include a strict non-disparagement clause, which states that the company can claw back their vested equity if they publicly criticize the company, or even if they acknowledge that the clause exists. And employees have been told that anyone who refuses to sign the severance agreement will also forfeit their equity.

After Vox reported on the severance terms, Altman posted on X that he had been unaware of the provision and was "genuinely embarrassed" by it. He said OpenAI had never tried to enforce the clause or claw back anyone's vested equity. He said the company was in the process of updating its exit paperwork to "fix" the issue, and that any former employee concerned about the provisions in their exit paperwork could contact him directly and it would be changed.
