Why OpenAI is getting harder to trust

A composite portrait of OpenAI CEO Sam Altman, Edward Snowden, and former NSA chief Paul Nakasone.
Getty Images

  • OpenAI appointed former NSA director Paul Nakasone to its board of directors.
  • Nakasone's hiring is intended to bolster AI security but raises surveillance concerns.
  • The company's internal security group has also been effectively dismantled.

Mysterious plainclothes security guards stand outside its office. It has just appointed a former director of the NSA to its board. And its internal working group aimed at promoting the safe use of artificial intelligence is effectively defunct.

OpenAI is feeling a little less open every day.

In its latest eyebrow-raising move, the company said Friday it has appointed former NSA director Paul Nakasone to its board of directors.

In addition to leading the NSA, Nakasone headed U.S. Cyber Command, the Defense Department's cybersecurity arm. OpenAI says Nakasone's hiring reflects its “commitment to safety and security” and underscores the importance of cybersecurity as AI continues to evolve.

“OpenAI's dedication to its mission aligns with my own values and experience in public service,” Nakasone said in a statement. “I look forward to contributing to OpenAI's efforts to ensure artificial general intelligence is useful for people around the world.”

But critics worry that Nakasone's hiring may represent something else: surveillance.

Edward Snowden, the US whistleblower who leaked classified documents about surveillance in 2013, said in a post on X that Nakasone's hiring was “a betrayal of the rights of every person on earth.”

“They are completely unmasked: never trust OpenAI or its products (ChatGPT etc.),” Snowden wrote.

In another comment on X, Snowden said that AI, combined with the oceans of mass-surveillance data accumulated over the last two decades, is putting “truly terrifying powers in the hands of an unaccountable few.”

On the other hand, Sen. Mark Warner, a Democrat from Virginia and chairman of the Senate Intelligence Committee, called Nakasone's hiring a “huge success.”

“There's nobody in the security community, at large, that's more respected,” Warner told Axios.

OpenAI may well need Nakasone's security expertise: critics worry that the company's security weaknesses could leave it open to attacks.

OpenAI fired safety researcher Leopold Aschenbrenner in April after he circulated a memo detailing a “major security incident.” He had described the company's security as “egregiously insufficient” to protect against theft by foreign actors.

Shortly after, OpenAI's Superalignment team — focused on developing AI systems aligned with human interests — abruptly disbanded after two of the company's key safety researchers left.

Jan Leike, one of the departing researchers, said he had been “disagreeing with OpenAI leadership about the company's core priorities for quite some time.”

Ilya Sutskever, the OpenAI chief scientist who originally launched the Superalignment team, was cagier about his reasons for leaving. But company insiders said he had been on shaky ground since his role in the failed ouster of CEO Sam Altman. Sutskever disapproved of Altman's aggressive approach to AI development, which fueled their power struggle.

And if all that weren't enough, even locals who live and work near OpenAI's office in San Francisco say the company is starting to creep them out. A cashier at a neighboring pet store told the San Francisco Standard that the office had a “secretive vibe.”

Several workers at neighboring businesses say men resembling undercover security guards stand outside the building but won't say whether they work for OpenAI.

“[OpenAI is] not a bad neighbor,” said one. “But they're secretive.”

