On Wednesday, OpenAI’s former chief scientist Ilya Sutskever announced that he is forming a new company called Safe Superintelligence Inc. (SSI) with the goal of safely building “superintelligence,” a hypothetical form of artificial intelligence that surpasses human intelligence, possibly in the extreme.
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” Sutskever wrote on X. “We will do it through revolutionary breakthroughs produced by a small cracked team.”
Sutskever was a founding member of OpenAI and previously served as the company’s chief scientist. Two others are initially joining Sutskever at SSI: Daniel Levy, formerly head of the optimization team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017.
Sutskever and several of his colleagues resigned from OpenAI in May, months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly criticize OpenAI after his departure (and OpenAI executives such as Altman wished him well on his new venture), another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “safety culture and processes have taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later in May.
A nebulous concept
OpenAI is currently trying to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to leap beyond that in a straight moonshot attempt at superintelligence, with no distractions along the way.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever said in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
During his previous tenure at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial superintelligence,” to be beneficial to humanity.
As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has sometimes been met with skepticism. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”
Like AGI, superintelligence is a nebulous term. Because the mechanics of human intelligence are still poorly understood, and because human intelligence is difficult to quantify or define given that there is no single type of human intelligence, identifying superintelligence when it arrives may be tricky.
Computers already surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.
“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”