Uproar in Silicon Valley over California's AI safety bill

Artificial intelligence heavyweights in California are protesting against a state bill that would force technology companies to adhere to stricter safety frameworks, including creating a “kill switch” to shut down their powerful AI models, in an escalating battle over regulatory control of the fast-developing technology.

California's legislature is considering proposals that would introduce new restrictions on tech companies operating in the state, covering the large language models run by the three biggest AI start-ups, OpenAI, Anthropic and Cohere, as well as those run by big tech companies such as Meta.

The bill, which passed the state Senate last month and is due for a vote by the General Assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability”, such as building biological or nuclear weapons or aiding cyber security attacks.

Under the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, developers would have to report their safety testing and introduce a so-called kill switch to shut down their models.

But the bill has drawn a backlash from many in Silicon Valley over claims it would force AI start-ups to leave the state and prevent platforms such as Meta from operating open-source models.

“If someone wanted to come up with regulations to stifle innovation, they could hardly do better,” said Andrew Ng, a renowned computer scientist who led AI projects at Alphabet's Google and China's Baidu and who sits on Amazon's board. “It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”

AI's rapid growth and huge potential have raised concerns about the safety of the technology, with billionaire Elon Musk, an early investor in ChatGPT maker OpenAI, calling it an “existential threat” to humanity last year. This week, a group of current and former OpenAI staff published an open letter warning that “frontier AI companies” lack sufficient oversight from governments and pose “serious risks” to humanity.

The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, the safety adviser to Musk's AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.

Scott Wiener, the Democratic state senator who introduced the legislation, said: “Fundamentally I want AI to succeed and innovation to continue, but let's try and get out ahead of any safety risks.”

He added that it was a “light touch” bill . . . that requires developers training large models to conduct basic safety assessments to identify major risks and take reasonable steps to mitigate them.

But critics have accused Wiener's bill of being overly restrictive and imposing a costly compliance burden on developers, especially smaller AI companies. Opponents also argue that it focuses on hypothetical risks and adds an “extreme” liability risk for founders.

One of the strongest criticisms is that the bill would undermine open-source AI models — in which developers make the source code freely available to the public so that others can build on top of them — such as Meta's flagship LLM, Llama. The bill would make developers of open models potentially liable for bad actors who manipulate those models to cause harm.

Arun Rao, lead product manager for generative AI at Meta, said in a post on X last week that the bill is “unworkable” and would “kill open source in [California]”.

“The net tax impact of disrupting the AI industry and driving companies out could be in the billions, as both companies and highly paid workers leave,” he added.

Wiener said of the criticism: “This is the tech sector, it doesn't like regulation, so it's not at all surprising to me that there would be pushback.”

He said some of the responses to the bill were “not entirely accurate”, adding that he planned amendments that would clarify its scope.

The proposed amendments state that open-source developers will not be liable for models that are heavily fine-tuned, meaning that if an open-source model is substantially modified by a third party, the group that built the original model will not be responsible for it. They also specify that the “kill switch” requirement would not apply to open-source models.

Another amendment specifies that the bill would only apply to large models that cost at least $100mn to train, and so would not affect most smaller start-ups.

“It's the competitive pressures that are affecting these AI organisations that essentially incentivise them to cut corners on safety,” said CAIS's Hendrycks, adding that the bill is “realistic and reasonable” and that most people want “some basic oversight”.

Yet one senior Silicon Valley venture capitalist said he was already fielding questions from founders about whether they would be required to leave the state as a result of the potential legislation.

“My advice to anyone who asks is that we stay and fight,” the investor said. “But it will chill the open-source and start-up ecosystem. I think some founders will choose to leave.”

Governments around the world have been taking steps to regulate AI over the past year as the technology has grown in popularity.

US President Joe Biden introduced an executive order in October aimed at setting new standards for AI safety and national security, protecting citizens from AI privacy threats and combating algorithmic discrimination. The UK government outlined plans for new legislation to regulate AI in April.

Critics have questioned the speed with which California's AI bill emerged and passed the Senate with the backing of CAIS.

The majority of funding for CAIS comes from Open Philanthropy, a San Francisco-based charity with roots in the effective altruism movement. It awarded CAIS around $9mn in grants between 2022 and 2023, in line with its focus area of “potential risks from advanced artificial intelligence”. The CAIS Action Fund, a division of the non-profit founded last year, registered its first lobbyist in Washington, DC in 2023 and has spent about $30,000 on lobbying this year.

Wiener has received campaign funding from wealthy venture capitalist Ron Conway, managing partner of SV Angel and an investor in AI start-ups.

Rayid Ghani, an AI professor at Carnegie Mellon University's Heinz College, said there had been “somewhat of an overreaction” to the bill, adding that particular care is needed with any legislation that aims to regulate the development of models rather than the use of the technology.
