Top AI researchers call on OpenAI, Meta and more to allow independent research.


More than 100 leading artificial intelligence researchers have signed an open letter calling on generative AI companies to give investigators access to their systems, arguing that the companies’ opaque rules are preventing safety testing of tools used by millions of consumers.

Strict protocols designed to stop bad actors from abusing AI systems are having a chilling effect on independent research, the researchers say. Auditors fear having their accounts banned, or being sued, if they try to safety-test AI models without a company’s blessing.

The letter was signed by experts in AI research, policy and law, including Percy Liang of Stanford University; Pulitzer Prize-winning journalist Julia Angwin; Renée DiResta of the Stanford Internet Observatory; Mozilla fellow Deb Raji, who has pioneered research into auditing AI models; Marietje Schaake, a former member of the European Parliament; and Brown University professor Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy.

The letter, sent to companies including OpenAI, Meta, Anthropic, Google and Midjourney, urges the tech firms to provide legal and technical safe harbor for researchers to question their products.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned research aimed at holding them accountable,” the letter said.

The effort comes as AI companies grow increasingly aggressive about shutting outside auditors out of their systems.

OpenAI claimed in recent court documents that the New York Times’ efforts to find potential copyright violations amounted to “hacking” its ChatGPT chatbot. Meta’s new terms say it will revoke the license for LLaMA 2, its latest large language model, if a user alleges that the system infringes intellectual property rights. Film artist Reid Southen, another signatory, had multiple accounts banned while testing whether the image generator Midjourney could be used to create copyrighted images of movie characters. After he drew attention to the results, the company amended its terms of service to include threatening language.

“If you knowingly infringe someone else’s intellectual property, and that costs us money, we’re going to come find you and collect that money from you,” the terms say. “We might also do other things, like try to get a court to make you pay our legal fees. Don’t do it.”

An accompanying policy proposal, co-authored by some of the signatories, says that OpenAI updated its terms to protect academic safety research after reading an early draft of the proposal, “though some ambiguity remains.”

AI companies’ policies generally prohibit users from using their services to create misleading content, commit fraud, violate copyright, influence elections or harass others. Users who violate the terms may have their accounts suspended or banned without an opportunity to appeal.

But to conduct independent investigations, researchers often have to deliberately break these rules. Because testing takes place under their own logins, some fear that AI companies, which are still developing ways to monitor potential rule breakers, could disproportionately crack down on users who bring negative attention to their business.

Although companies such as OpenAI offer special programs to grant access to researchers, the letter argues that this setup promotes favoritism, with companies hand-picking their evaluators.

External research has uncovered vulnerabilities in widely used models such as GPT-4, including the ability to bypass safety measures by translating English inputs into less commonly used languages such as Hmong.

In addition to a safe harbor, companies should provide direct channels so that outside researchers can tell them about problems with their tools, said Borhane Blili-Hamelin, a researcher with the nonprofit AI Risk and Vulnerability Alliance.

Otherwise, the best way to surface a potential harm may be to embarrass a company on social media, which narrows the range of risks to the public that get investigated and leaves companies in an adversarial position with researchers.

“We have a broken oversight ecosystem,” said Blili-Hamelin. “Sure, people find problems. But the only channel for impact is these ‘gotcha’ moments where you catch the company with its pants down.”

