A patchwork of state AI laws is creating a ‘mess’ for US businesses.

The laws governing artificial intelligence vary depending on where you are in the U.S., a growing source of confusion for businesses racing to capitalize on the rise of AI.

This year, lawmakers in Utah are debating legislation that would require certain businesses to disclose when their products interact with consumers using AI rather than a human.

In Connecticut, state lawmakers are considering a bill that would impose stricter transparency requirements on the inner workings of AI systems deemed “high risk.”

They are among 30 states (plus the District of Columbia) that have proposed or adopted new laws directly or indirectly restricting how AI systems are designed and used.

The legislation targets everything from child protection and data transparency to reducing bias and protecting consumers from AI-driven decisions in healthcare, housing and employment.

“It’s really a mess for business,” Goli Mahdavi, an attorney at Bryan Cave Leighton Paisner, said of the bills still in development and the newly enacted laws. “It’s just a lot of uncertainty.”

The complexity of laws across the U.S. stems from Washington’s inaction on direct federal regulation of the rapidly evolving technology, largely because not all U.S. lawmakers agree that new rules are needed to police AI.

Things are different in other parts of the world. The European Union passed a comprehensive AI law this year, the AI Act. And China has adopted more politically focused AI laws that target AI-generated news distribution, deepfakes, chatbots and datasets.

Yet the state laws being debated or enacted in the U.S. reflect priorities set by the federal government, Mahdavi said.

President Biden, for example, directed AI developers and users to implement AI “responsibly” in an executive order issued last October. Then in January, the administration added a requirement for developers to disclose the results of their safety tests to the government.

Vice President Kamala Harris applauded as President Joe Biden signed an executive order on the development and use of artificial intelligence last October. (Brendan Smialowski/AFP via Getty Images)

State laws share some common themes, but their subtle differences can make business compliance a challenge.

California, Colorado, Delaware, Texas, Indiana, Montana, New Hampshire, Virginia, and Connecticut have adopted consumer protection laws that require businesses to inform consumers about automated decision-making and give them the right to opt out of profiling technology used to produce legally significant effects.

These laws broadly prohibit companies from subjecting consumers to automated decision-making technology without their consent.

Businesses cannot, for example, profile users based on their work performance, health status, location, financial status and other factors unless users expressly agree.

Colorado’s statute goes further, prohibiting AI from generating discriminatory insurance rates.

However, the term “automated decision making,” which appears in most statutes, is defined differently among states.

In some states, for example, employment or financial services decisions are not considered automated as long as they are made with some level of human involvement.

New Jersey and Tennessee have so far stopped short of adopting opt-out provisions. Both states, however, require those who use AI for profiling and automated decision-making to conduct risk assessments to ensure the safety of consumers’ personal data.

In Illinois, a law that took effect in 2022 limits employers’ use of AI in video assessments of job candidates. Candidates must consent before an employer can use AI to evaluate their video interviews.

Springfield, Ill. (Antonio Perez/Chicago Tribune via Getty Images)

In Georgia, a narrow law allowing ophthalmologists to use AI went into effect in 2023. It requires that AI tools and equipment used to analyze eye images and other eye assessment data cannot be solely relied upon to generate an initial prescription, or the first renewal of one.

New York has become the first state to require that employers conduct bias audits of their AI-powered employment decision tools. This law came into effect in July 2023.

A number of states followed this trend more broadly, requiring that organizations and individuals using AI conduct a data risk assessment before using the technology to process consumer data.

What’s helping many states get these laws through their legislatures is “the historic level of one-party control,” said Scott Babwa Brennan, head of online expression policy at UNC-Chapel Hill’s Center on Technology Policy.

Last year, nearly 40 states had legislatures controlled by a single party, more than double the 17 such legislatures in 1991.
