Departing researcher says OpenAI puts “shiny products” above safety

A former senior OpenAI employee has said the company behind ChatGPT is prioritizing “shiny products” over safety, revealing that he resigned after disagreements over key objectives reached a “breaking point”.

Jan Leike was a key safety researcher at OpenAI who co-led its superalignment team, which worked to ensure that powerful artificial intelligence systems align with human values and goals. His intervention comes ahead of next week's global artificial intelligence summit in Seoul, where politicians, experts and tech executives will discuss oversight of the technology.

Leike resigned days after the San Francisco-based company launched its latest AI model, GPT-4o. His departure means two of OpenAI's top safety figures have left this week, following the resignation of OpenAI co-founder and fellow co-head of superalignment Ilya Sutskever.

Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority at the company.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.

Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.

— Jan Leike (@janleike) May 17, 2024

OpenAI was founded with the goal of ensuring that artificial general intelligence, which it describes as “AI systems that are generally smarter than humans”, benefits all of humanity. In his X posts, Leike said he had been at odds with OpenAI's leadership over the company's core priorities for some time, until the disagreement “finally reached a breaking point”.

Leike said OpenAI, which also developed the Dall-E image generator and the Sora video generator, should invest more resources in issues such as safety, social impact, privacy and security for its next-generation models.

“These problems are quite hard to get right, and I'm concerned we aren't on a trajectory to get there,” he wrote, adding that it was becoming “harder and harder” for his team to conduct its research.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote, adding that OpenAI “must become a safety-first AGI company”.

OpenAI Chief Executive Sam Altman responded to Leike's thread with a post on X thanking his former colleague for his contributions to the company's safety culture.

“He's right, we have a lot more to do; we are committed to doing it,” he wrote.

Sutskever, who was also OpenAI's chief scientist, wrote in his X post announcing his departure that he believed OpenAI under its current leadership would “build AGI that is safe and beneficial”. Sutskever initially supported Altman's removal as OpenAI's boss last November, before backing his reinstatement after days of internal turmoil at the company.

Leike's warning came as a panel of international AI experts released an inaugural report on AI safety, which said there was disagreement over the likelihood of powerful AI systems escaping human control. It also warned that regulators could be left behind by rapid advances in the technology, citing a “potential disparity between the speed of technological progress and the speed of regulatory response”.
