California's state senate passed a bill last month to regulate the development and training of advanced AI models, with the goal of ensuring that bad actors can't use them for nefarious purposes. The passage of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is causing an uproar among developers, many of whom are based in the state, who argue that it could severely stifle innovation. Proponents of the bill, however, believe that the rapidly evolving technology requires guardrails, and that the legislation is a reasonable, if limited and incomplete, first step toward legally enshrining safety best practices.
State Senator Scott Wiener introduced SB 1047 in the California Senate in February, and the bill is now being amended in the Assembly. June's amendments clarified several provisions that had created definitional confusion, but they also introduced language that essentially guarantees the law will apply primarily to the largest AI model developers: Anthropic, Google DeepMind, OpenAI, and Meta, among others. The bill will face an Assembly vote in August to determine whether it becomes law.
At its core, SB 1047 aims to make developers of large-scale AI models legally accountable for providing reasonable assurance that their models do not contain dangerous capabilities that could cause serious harm. The bill's definition of such harm includes the creation of chemical, biological, radiological, or nuclear weapons that could cause mass casualties; cyberattacks on critical infrastructure causing at least $500,000,000 in damage; or actions by an autonomous AI model that cause the same level of damage, harm to humans, theft, property damage, or other threats to public safety. To guard against such outcomes, developers must be able to completely shut down their models, effectively building in kill switches.
The proposed law applies to AI models “trained using more than 10^26 integer or floating-point operations” [FLOPS] of computing power, where “the cost of that quantity of computing power would exceed one hundred million dollars ($100,000,000).” In other words, the bill primarily targets the largest AI models of the future (presumably larger than today's most advanced models), built by the companies that can afford some of the most expensive model training runs. The bill uses the same computing-power threshold (10^26 FLOPS) that the Biden administration's AI executive order cites to specify which models it covers. Unless a model is exempted under the law, its developers must submit an annual certification of compliance, and they must report AI safety incidents involving their models to a new Frontier Model Division within the California Department of Technology.
The debate surrounding Senate Bill 1047 highlights an important point: Regulators will need to walk a fine line to enact legislation that protects against the future risks of frontier models, a common term for AI loosely defined as being at the cutting edge of development, without negatively impacting innovation. How the bill develops and is potentially implemented may foreshadow broader regulatory debates.
Advantages and disadvantages. SB 1047 has faced scrutiny from the start for trying to regulate the world leaders in AI on their home turf, and California's regulation could influence other state or federal policies in a year in which the United States is reportedly considering more than 600 AI bills. That said, from a policy perspective, SB 1047 doesn't necessarily read as a controversial attempt to rein in big tech at the expense of AI development. It requires developers to take reasonable steps to prevent their models from causing societal harm, such as basic safety testing and other (nascent) industry best practices. It does not impose strict liability on developers, meaning developers are legally liable for harms caused by their models only if they fail to take such precautions or if they commit perjury when reporting model capabilities or AI safety incidents. Developers will not be penalized for paperwork mishaps, nor will they be penalized if a model causes harm, provided they made a good-faith effort to report the risk to the Frontier Model Division.
Prominent voices in the AI safety community have offered support, but others say the bill doesn't go far enough to protect the public from AI threats: it merely enforces, as a foundational first step toward reducing risks at scale, safeguards that big AI developers currently apply only through self-policing. AI safety advocates insist that AI firms should not be considered unique among industries in needing regulation.
However, developers are concerned that the bill could stifle innovation by limiting the development and fine-tuning of open foundation models (although few, if any, of today's open models would be affected by the bill). Foundation models are large-scale, general-purpose models trained on large datasets that can perform a variety of tasks (think generative AI), while open models are those whose weights (algorithmic parameters) can be adjusted by any user with the skills and resources to do so, changing the model's outputs. Open models crowdsource development, drawing on many users' contributions to improve performance and uncover security vulnerabilities.
Supporters of open models fear that developers could be held legally liable for third parties' adjustments to their models' weights that result in harm, and they argue that the bill would threaten the beneficial open-model ecosystem. While this may indeed be true for the AI systems covered by the proposed legislation, given the size and capabilities of the foundation models involved and the level of harm described in the bill, caution about releasing such models with open weights is likely warranted: it is important to limit opportunities for malicious actors, including state actors, to use powerful models in politically destructive or socially harmful ways. Other concerns include that developers cannot reasonably anticipate or insure against every harmful misuse of their models, and that it is unreasonable to expect them to. Some also argue that state regulations could conflict with an eventual mandatory federal regulatory framework and create procedural burdens for AI firms.
Perhaps surprisingly, opponents of SB 1047 include TechNet, a network of technology companies whose members include Anthropic, Apple, Google, Meta, and OpenAI, as well as the California Chamber of Commerce, the California Civil Justice Association, and the California Manufacturers & Technology Association. That said, the number of companies affected by the legislation in the short term is small, and its near-term chilling effects on innovation appear limited. Much of the private sector's concern about the bill is prospective, relating to the potential regulatory burden that small companies may one day bear (since small companies currently cannot afford the training costs outlined in the bill or are unlikely to access that much compute).
The challenge of agile policymaking. One hotly debated aspect of the bill is what moving from a voluntary, self-regulatory approach to a mandatory approach to model safety could mean for tech companies. At the federal level, AI regulation to date has relied on a combination of so-called soft-law mechanisms, which are non-legally-binding policies, and companies' voluntary compliance with agency guidelines for the use of AI. Thus, when it comes to big-picture questions about AI safety, voluntary compliance and good-faith responsible behavior on the part of AI developers have largely been the name of the game in the US. See, for example, President Biden's summit with tech executives at the White House, or the National Institute of Standards and Technology's publicly available and voluntary AI Risk Management Framework.
When government capacity, in areas such as resources and human capital or expertise, is a limiting factor, a soft-law approach may be a sensible one. Lawmakers and their staffs also face a pacing problem: regulatory and ethical frameworks often struggle to keep up with advances in emerging technologies because of the complexity and technical expertise involved, and AI frequently touches on politically sensitive and socially complex issues. Members of the US Congress have themselves admitted that they and their colleagues lack an understanding of artificial intelligence, and they have struggled to reach consensus on the most relevant types of threats, slowing AI regulatory initiatives. Meanwhile, federal agencies are introducing guidance for AI or explaining how existing policies cover applications in specific areas. This approach offers the advantage of not having to cut new AI regulations from whole cloth, but it is, in general, a narrow regulatory approach.
Because SB 1047 represents a move from a soft-law approach to a statutory liability approach, the hostile industry response may signal similar challenges for such a transition at the federal level. Major AI developers have already begun implementing, to varying degrees, safety practices and reporting in their model development and in their plans for avoiding large-scale AI risks, yet they have opposed California's legislation. It's possible that tech firms would have supported a differently structured safety regulation and simply found this bill's particular approach objectionable. Nevertheless, navigating the line between soft and hard policies for AI regulation, especially in the face of private-sector resistance, appears to be difficult for policymakers.
The episode also highlights the difficulty of agile, forward-looking legislation. SB 1047 seeks to keep the gap between technology development and public oversight and policy from widening, using loose definitions meant to mitigate risks without taking an overly prescriptive approach to risk reduction. Yet definitional uncertainty has allowed opponents of the bill to speculate, sometimes incorrectly, that strict enforcement would hurt businesses. Perhaps as a result, a provision that would originally have covered future, more efficient models that perform as well as models trained with 10^26 FLOPS in 2024 was removed in the latest round of amendments. This essentially reduces the likelihood that the bill's safety requirements will ever apply to small businesses, startups, or academic researchers who, thanks to future improvements in algorithmic efficiency, could develop and train powerful models with less compute and at lower training cost than large companies.
Ultimately, SB 1047 is likely to remain a lightning rod for some AI safety advocates and industry developers alike as it works its way through the legislature. Despite adopting drafting strategies intended to keep the bill from becoming obsolete, the legislation generated pushback that led to changes that could reduce its longevity. As broader regulatory discussions take place at the federal and state levels across the United States, regulators should take note.