What the EU’s tough AI law means for research and ChatGPT

Representatives of EU member governments approved the EU AI Act this month. Credit: Jonathan Raa/NurPhoto via Getty


EU countries are set to adopt the world’s first comprehensive rules to regulate artificial intelligence (AI). The EU AI Act places its strictest rules on the most dangerous AI models, and is designed to ensure that AI systems are safe and respect fundamental rights and EU values.

“This process is very consequential, in terms of how we think about AI regulation, and sets a precedent,” says Rishi Bommasani, who researches the societal impact of AI at Stanford University in California.

The legislation comes as AI advances rapidly. This year is expected to see the release of new versions of generative AI models – such as GPT, which powers ChatGPT and is developed by OpenAI in San Francisco, California – and to see existing systems put to work in tasks such as detecting scams and disinformation. China already uses a patchwork of laws to guide the commercial use of AI, and regulation in the United States is under way. Last October, President Joe Biden signed the nation’s first AI executive order, requiring federal agencies to take action to manage the risks of AI.

EU countries’ governments approved the legislation on 2 February, and the law now needs a final sign-off from the European Parliament, one of the EU’s three legislative branches; that vote is expected in April. If the text remains unchanged, as policy watchers expect, the law will come into force in 2026.

Some researchers have welcomed the act for its potential to encourage open science, whereas others worry that it could stifle innovation. Nature examines how the law will affect research.

What is the EU’s approach?

The EU has chosen to regulate AI models on the basis of their potential risk, applying stricter rules to riskier applications and outlining separate regulations for general-purpose AI models, such as GPT, which have broad and unpredictable uses.

The law bans AI systems that pose an ‘unacceptable risk’, for example those that use biometric data to infer sensitive characteristics, such as people’s sexual orientation. High-risk applications, such as the use of AI in hiring and law enforcement, must fulfil certain obligations; for example, developers must show that their models are safe, transparent and explainable to users, and that they adhere to privacy regulations and do not discriminate. For lower-risk AI tools, developers will still have to tell users when they are interacting with AI-generated content. The law applies to models operating in the EU, and any firm that violates the rules risks a fine of up to 7% of its annual global turnover.

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has become increasingly powerful and ubiquitous, he says. “It makes perfect sense to develop a framework to guide its use and development.”

Some people don’t think the law goes far enough, because it includes exemptions for AI developed for military and national-security purposes, as well as loopholes for its use in law enforcement and migration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a non-profit organization based in Berlin that studies the impact of automation on society.

How much will it affect researchers?

In theory, very little. Last year, the European Parliament added a clause to the draft act that would exempt AI models developed purely for research, development or prototyping. Joanna Bryson, who studies AI and its regulation at the Hertie School in Berlin, says the EU has worked hard to ensure the act does not negatively affect research. “They don’t really want to end innovation, so I’d be surprised if that’s going to be a problem.”

The European Parliament must give the final green light to the law. Voting is expected in April. Credit: Jean-Francois Badias/AP via Alamy

But the act is still likely to have an effect, Hovy says, by making researchers think about transparency, how they report on their models and potential biases. “I think it will filter down and promote good practice,” he says.

Robert Kaczmarczyk, a physician at the Technical University of Munich in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization that aims to democratize machine learning, worries that the law could hamper the small companies that drive research, which might need to set up internal structures to comply with the rules. “It’s really hard to adapt as a small company,” he says.

What does this mean for powerful models like GPT?

After heated debate, policymakers chose to regulate powerful general-purpose models—such as generative models that create images, code, and video—in their own two-tier category.

The first tier includes all general-purpose models, except those used only in research or published under an open source license. These will be subject to transparency requirements, including detailing their training methods and energy consumption, and must demonstrate that they respect copyright laws.

A second, much more stringent, tier will cover general-purpose models deemed to have “high-impact capabilities”, which pose a greater “systemic risk”. These models will be subject to “some very important obligations”, Bommasani says, including rigorous safety testing and cybersecurity checks. Developers will be required to release details of their architecture and data sources.

For the EU, ‘big’ effectively equates to dangerous: any model that uses more than 10²⁵ FLOPs (the number of computing operations) during training qualifies as high impact. Training a model with that much computing power costs between US$50 million and $100 million, Bommasani says. The threshold should capture models such as GPT-4, OpenAI’s current model, and could include future iterations of Meta’s open-source rival, LLaMA. Open-source models in this tier are subject to regulation, although research-only models are exempt.
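As a rough, non-authoritative illustration of how the 10²⁵-FLOP cut-off and the quoted cost estimate relate, the short Python sketch below runs the back-of-envelope arithmetic. The cost-per-exaFLOP figure is an assumption chosen only to be consistent with the $50-million to $100-million range Bommasani cites; it is not taken from the act or from any cloud provider’s pricing.

```python
# Back-of-envelope check of the EU AI Act's 10^25-FLOP threshold (illustrative only).
# The cost-per-exaFLOP range below is an ASSUMPTION, picked to match the
# $50 million-$100 million training-cost estimate quoted in the article.

THRESHOLD_FLOPS = 1e25           # cut-off for "high-impact" general-purpose models
COST_PER_EXAFLOP_USD = (5.0, 10.0)  # hypothetical effective cost per 10^18 FLOPs

exaflops = THRESHOLD_FLOPS / 1e18   # 10^25 FLOPs = 10 million exaFLOPs
low, high = (exaflops * c for c in COST_PER_EXAFLOP_USD)

print(f"Training at the threshold: about {exaflops:,.0f} exaFLOPs, "
      f"roughly ${low/1e6:.0f} million to ${high/1e6:.0f} million "
      f"at the assumed price per exaFLOP")
```

Under these assumed prices the arithmetic lands on roughly $50 million to $100 million, matching the figure quoted above; with different hardware or pricing assumptions the dollar range would shift, but the 10²⁵-FLOP threshold itself is fixed in the act.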

Some scientists are against regulating AI models themselves, preferring to focus on how they are used. “Smarter and more capable doesn’t mean more damage,” says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and another co-founder of LAION. Jitsev adds that basing regulation on any measure of capability has no scientific grounding, likening it to deeming dangerous all chemistry that uses a certain number of person-hours. “It’s just as inconclusive.”

Will this act boost open source AI?

EU policymakers and open-source advocates are hopeful. The act incentivizes making AI information available, replicable and transparent, Hovy says, which is almost like “reading the manifesto of the open-source movement”. Some models are more open than others, and it remains unclear how the language of the act will be interpreted, Bommasani says. But he thinks lawmakers intend to exempt open general-purpose models, such as LLaMA-2 and those from the Paris-based start-up Mistral AI.

Bommasani says the EU’s approach to encouraging open-source AI is notably different from the US strategy. “The EU’s argument is that open source is vital for the EU to compete with the US and China.”

How will this act be implemented?

The European Commission will create an AI Office to oversee general-purpose models, advised by independent experts. The office will develop ways to evaluate the capabilities of these models and to monitor the associated risks. But even if companies such as OpenAI comply with the rules and submit, for example, their enormous data sets, Jitsev questions whether a public body will have the resources to scrutinize such submissions adequately. “The demand for transparency is very important,” says Jitsev. “But little thought has been given to how these procedures will be carried out.”
