States are struggling to provide oversight as AI permeates everyday life.

While artificial intelligence made headlines with ChatGPT, behind the scenes the technology has quietly permeated everyday life — screening job resumes, rental apartment applications, and, in some cases, even determining medical care.

While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there is little government oversight.

Legislators in at least seven states are taking major legislative swings to curb bias in artificial intelligence, filling the void left by congressional inaction. The proposals are some of the first steps in a decades-long debate over balancing the benefits of this new technology with the widely documented risks.

“AI actually affects every part of your life whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights.

“Now, you wouldn’t care if they all worked out. But they don’t.”

Success or failure will depend on lawmakers working through complex issues while negotiating with an industry worth hundreds of billions of dollars and growing at a speed best measured in light years.

12 out of 200 bills

Last year, only about a dozen of the nearly 200 AI-related bills introduced in statehouses were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.

Along with those bills, more than 400 AI-related bills are being debated this year, most of them aimed at regulating smaller pieces of AI. Nearly 200 target deepfakes, including proposals to bar obscene deepfakes, such as those of Taylor Swift that flooded social media. Others seek to rein in chatbots, such as ChatGPT, to ensure they don’t serve up instructions to build a bomb, for example.

These are separate from seven state bills that would apply across industries to regulate AI discrimination — one of the technology’s most perverse and complex problems — being debated from California to Connecticut.

Those who study AI’s propensity to discriminate say states are already lagging in setting up guardrails. The use of AI to make consequential decisions — what the bills call “automated decision tools” — is widespread but largely invisible.

It is estimated that around 83% of employers use algorithms to assist in hiring. According to the Equal Employment Opportunity Commission, it’s 99% for Fortune 500 companies.

Yet a Pew Research poll shows that a majority of Americans are unaware these tools are being used, let alone whether the systems are biased.

Historical data bias

An AI can learn bias through the data it’s trained on, usually historical data that can hold a Trojan horse of past discrimination.

Amazon scrapped its hiring algorithm project nearly a decade ago after finding it favored male applicants. The AI was trained to assess new resumes by learning from past resumes — largely submitted by male applicants. Although the algorithm did not know the applicants’ genders, it still downgraded resumes containing the word “women’s” or listing women’s colleges, in part because they were underrepresented in the historical data it learned from.

“If you’re letting the AI learn from decisions that existing managers have historically made, and if those decisions have historically favored some and disfavored others, then that’s what the technology will learn,” said Christine Webber, an attorney in a class-action lawsuit alleging that an AI system that scores rental applicants discriminated against those who are Black or Hispanic.

Court documents allege that the plaintiff in the case, Mary Louis, a Black woman, applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party service that we use to screen all prospective tenants has denied your tenancy.”

When Louis submitted two landlord references to show she had paid rent early or on time for 16 years, according to court records, she received another response: “Unfortunately, we do not accept appeals and cannot override the tenant screening results.”

California’s measure

A lack of transparency and accountability is, in part, what the bills are targeting, following California’s failed proposal last year — the first comprehensive effort to regulate AI bias in the private sector.

Under the bills, companies using these automated decision tools would have to conduct an “impact assessment,” including a description of how AI figures into a decision, the data collected, an analysis of the risks of discrimination, and an explanation of the company’s safeguards. Depending on the bill, those assessments would be submitted to the state or could be requested by regulators.

Some bills would also require companies to tell consumers that an AI will be used to make decisions, and allow them to opt out with some caveats.

Craig Albright, senior vice president of U.S. government relations at the BSA, an industry lobbying group, said its members generally favor some of the proposed measures, such as impact assessments.

Industry backing

“Technology moves faster than the law, but there are actually benefits to the law catching up, because then (companies) understand what their responsibilities are and consumers can have more confidence in the technology,” Albright said.

But the legislation has gotten off to a weak start. A bill in Washington state has already stalled in committee, and a 2023 California proposal, on which many of the current bills were modeled, also died.

California Assemblymember Rebecca Bauer-Kahan has revamped her legislation, which failed last year, with the backing of some tech companies, such as Workday and Microsoft, after removing a requirement that companies routinely submit their impact assessments. Other states where bills have been introduced, or are expected to be introduced, include Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.

While the bills are a step in the right direction, said Brown University’s Venkatasubramanian, the impact assessments and their ability to catch bias remain vague. Without greater access to the reports — which many of the bills limit — it is hard to even know whether a person has been discriminated against by AI.

A more intensive but accurate way to identify discrimination would be to require bias audits — tests to determine whether an AI is discriminating — and to make the results public. That is where the industry pushes back, arguing such audits would expose trade secrets.

Requirements for routine testing of AI systems are not in most legislative proposals, almost all of which still have a long way to go. Still, it’s just the start of lawmakers and voters trying to figure out what is becoming, and will remain, an ever-present technology.

“It covers everything in your life. That alone should make you care,” Venkatasubramanian said.

Copyright 2024 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
