Whistleblowers say OpenAI illegally prevented staff from sharing risks.

OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging that the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, and they have demanded an investigation.

According to a seven-page letter sent to the SEC commissioner earlier this month, the whistleblowers said OpenAI issued overly restrictive employment, severance and nondisclosure agreements to its employees that could have exposed workers to penalties for raising concerns about OpenAI to federal regulators, and they asked the agency to open a formal investigation. The letter was obtained exclusively by The Washington Post.

OpenAI made staff sign employee agreements that required them to waive their federal rights to whistleblower compensation, the letter said. Those agreements also required OpenAI staff to get prior consent from the company before disclosing information to federal authorities, and the company did not exempt employees from its non-disparagement clauses when disclosing securities violations to the SEC.

The letter said these overly broad agreements violated longstanding federal laws and regulations intended to protect whistleblowers who want to disclose damaging information about their company anonymously and without fear of retaliation.

“These contracts sent a message that 'we don't want … employees talking to federal regulators,'” said one of the whistleblowers, who spoke on condition of anonymity for fear of retaliation. “I don't think AI companies can develop technology that is safe and in the public interest if they shield themselves from scrutiny and dissent.”

In a statement, OpenAI spokeswoman Hannah Wong said, “Our whistleblower policy protects employees' rights to make protected disclosures. Additionally, we believe rigorous debate about this technology is essential and have already made important changes to our departure process to remove nondisparagement terms.”

The whistleblowers' letter comes amid concerns that OpenAI, which started as a nonprofit with an altruistic mission, is prioritizing profit over safety in building its technology. The Post reported on Friday that OpenAI rushed safety testing of its latest AI model, which powers ChatGPT, to meet a May release date set by company leaders, despite employees' concerns that the company had “failed” to adhere to its own testing protocol, which it said protects the AI from catastrophic harms, such as teaching users to build bioweapons or helping hackers develop new types of cyberattacks. In a statement, OpenAI spokeswoman Lindsey Held said the company “didn't cut corners on our safety process, though we recognize the launch was stressful for our teams.”

Tech companies' strict confidentiality agreements have long vexed workers and regulators. During the #MeToo movement and the national protests in response to the killing of George Floyd, workers warned that such legal agreements limited their ability to report sexual harassment or racial discrimination. Regulators, meanwhile, are concerned that the terms muzzle tech employees who might alert them to misconduct in the opaque tech sector, particularly amid allegations that companies' algorithms promote content that undermines elections, public health and child safety.

The rapid advance of artificial intelligence has sharpened policymakers' concerns about the power of the tech industry, prompting a flood of calls for regulation. In the United States, AI companies are largely operating in a legal vacuum, and policymakers say they cannot effectively craft new AI policies without the help of whistleblowers, who can help explain the potential risks posed by the rapidly advancing technology.

“OpenAI's policies and practices appear to cast a chilling effect on whistleblowers' right to speak up and receive due compensation for their protected disclosures,” Sen. Chuck Grassley (R-Iowa) said in a statement to The Post. “In order for the federal government to stay one step ahead of artificial intelligence, OpenAI's nondisclosure agreements must change.”

A copy of the letter, addressed to SEC Chairman Gary Gensler, was sent to Congress. The Post obtained the whistleblowers' letter from Grassley's office.

The letter refers to a formal complaint that was submitted to the SEC in June. Stephen Cohen, a lawyer representing the OpenAI whistleblowers, said the SEC has responded to the complaint.

It could not be determined whether the SEC has opened an investigation. The agency did not respond to a request for comment.

The letter says the SEC should take “swift and aggressive” steps to address these allegedly illegal agreements, as they may be relevant across the broader AI sector and may violate an October White House executive order that calls on AI companies to develop the technology safely.

“At the heart of any such enforcement effort is the recognition that insiders must be free to report concerns to … federal authorities,” the letter said. “Employees are in the best position to detect and warn against the types of threats outlined in the executive order and are also in the best position to help ensure that AI benefits humanity rather than backfires.”

Those contracts threatened employees with criminal prosecution under trade secret laws if they reported violations of the law to federal authorities, Cohen said. Employees were instructed to keep company information confidential under threat of “severe sanctions,” he said, without acknowledging their right to report such information to the government.

“In terms of monitoring AI, we're at the very beginning,” Cohen said. “We need employees to move forward, and we need OpenAI to be open.”

The SEC should require OpenAI to produce every employment, severance and investor agreement that includes nondisclosure clauses to ensure they don't violate federal laws, the letter said. Federal regulators should require OpenAI to notify all former and current employees of the company's alleged violations and to inform them that they retain the right to report any violations of the law to the SEC confidentially and anonymously. The SEC should also issue penalties to OpenAI for “each improper agreement” under SEC law and direct OpenAI to cure the “chilling effect” of its past practices, according to the whistleblowers' letter.

Several tech employees, including Facebook whistleblower Frances Haugen, have filed complaints with the SEC, which established a whistleblower program in the wake of the 2008 financial crisis.

Chris Baker, a San Francisco lawyer, said it has been a long battle to fight Silicon Valley's use of NDAs to “monopolize information.” He won a $27 million settlement for Google employees in December over claims that the tech company used strict confidentiality agreements to block whistleblowing and other protected activity. Now tech companies are increasingly fighting back with savvier ways to deter speech, he said.

“Employers have learned that the cost of a leak is sometimes much higher than the cost of litigation, so they're willing to take the risk,” Baker said.
