This spring, Colorado passed the nation's first comprehensive law on how companies and governments use artificial intelligence to make important decisions about people's lives.
“Whether (people) get insurance, or what their insurance rate is, or legal decisions or employment decisions, whether you're fired or hired, can depend on an AI algorithm,” warned Democratic state Rep. Briana Titone, one of the bill's main legislative sponsors.
The statute is not intended to cover serious counterfeiting or fraud, which some states, including Colorado, have addressed in other laws. Instead, it applies to how AI is used to assess people for things like school applications, jobs, loans, and access to health care or insurance.
It takes effect in 2026 and requires companies and some government agencies to notify people when AI systems are used. If someone feels the technology has treated them unfairly, the law allows them to correct their data or file a complaint, and it establishes a process for investigating bad actors.
“If you're fired by an AI process and you say, 'Well, that's impossible, there's no way to fire me,'” Titone said, “you can go to the attorney general's office and seek a resolution, to say, 'We need someone to step in and double-check that this process is not actually discriminatory and biased against this person.'”
She said that in some cases AI has been found to favor people based on their names or hobbies, such as, “If your name is Jared and you played lacrosse.”
Democratic state Rep. Manny Rutinel, another sponsor, said some provisions require companies to identify how their algorithms could lead to discrimination and to disclose what data is used to train the systems.
“We still have a lot to do,” Rutinel said. “But I think this is a huge first step, a very important and strong first step to making sure that technology works for everyone, not just a privileged few.”
Colorado's initiative is being watched by other states.
The Colorado law was inspired by a similar proposal introduced in Connecticut earlier this year, which failed to pass there. Other places have established stricter policies: New York City requires employers using AI technologies to conduct independent “bias audits” of certain software tools and share them publicly.
“So states are obviously looking at each other to see how they can put their stamp on the regulation,” said Helena Almeida, vice president and managing counsel of ADP, which develops AI payroll services for several large companies.
“It's certainly going to impact all employers and deployers of AI systems,” Almeida said of the Colorado law.
Matt Scherer, an attorney at the Center for Democracy and Technology, said companies have been using various automated systems, also branded as AI, to make employment decisions for at least the past eight years.
“We have really little insight into how companies are using AI to decide who gets a job, who gets a promotion, who gets an apartment or a mortgage or a home or access to health care. And it's a situation that's not sustainable because, again, these decisions have major impacts on people's lives,” he said.
But he's concerned that the Colorado law doesn't give individuals a specific right to sue over AI-related harms.
“There is certainly a lot of concern among labor unions and civil society organizations that this bill doesn't have enough teeth to really force companies to change their behavior.”
Plans to change the law are already underway – this is just the beginning.
When Democratic Gov. Jared Polis signed SB24-205 in May, he told lawmakers he did so with reservations, writing, “I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.”
He said AI regulation is best decided by the federal government, so there is a national approach and a level playing field.
However, Polis said he hopes Colorado's law will advance the AI debate, especially at the national level, and he asked lawmakers to improve it before it takes effect. A state task force will begin meeting in September and make recommendations in February. Polis has outlined areas of concern and asked the task force to focus regulations on software developers rather than the small companies that use AI systems.
Polis said the law could be used to target AI users even when any discrimination is unintentional.
“I want to be clear in my goal that Colorado remains the home of cutting-edge technologies and that our consumers have full access to important AI-based products,” he wrote.
The industry is watching this law, and others likely to come.
Michael Brent, of Boston Consulting Group, works with companies as they develop and deploy AI systems, helping them identify and try to mitigate the ways AI can harm communities.
“Companies want to build systems that are faster, cheaper, more accurate, more reliable, less damaging to the environment,” he said. Colorado's law could encourage transparency for people affected by AI, he added.
“They can go into that space where they're having that moment of critical reflection, and they can just say to themselves, 'You know what? I actually don't want a machine learning system to process my data in this conversation. I would prefer to opt out by closing this window or calling a human.'”
Colorado is very much at the beginning of figuring this out with the tech industry, Titone said, as it focuses on creating comprehensive regulations.
“We have to be able to communicate and understand what these issues are and how they can be abused and misused.”
Bente Birkeland covers state government for CPR News.