Colorado has enacted a first-of-its-kind Artificial Intelligence Act that regulates the development and use of artificial intelligence.
Here are five things you should know about the Colorado AI Act in its current form—and how it could change before it takes effect.
1. The Act is likely to change before it takes effect in 2026.
Although the AI Act won't go into effect until February 2026 at the earliest, Colorado is already facing growing pressure to change the law due to concerns about unintended effects on consumers and businesses.
Colorado Gov. Jared Polis said in a letter that lawmakers plan to revise the law "to ensure that the final regulatory framework will protect consumers and support Colorado's leadership in the AI sector."
2. The Act applies primarily to high-risk AI systems.
The Act applies only to "high-risk artificial intelligence systems," defined as "any artificial intelligence system that, when deployed, makes, or is a significant factor in making, a consequential decision."
- Artificial intelligence system: "[A]ny machine-based system that, for any explicit or implicit purpose, infers from the inputs the system receives how to produce outputs . . . which may affect the physical or virtual environment."
- Consequential decision: "A decision that has a material legal or similarly significant effect on the provision or denial to any [Colorado resident] of, or the cost or terms of:"
- Educational enrollment or educational opportunity.
- Job or employment opportunity.
- A financial or lending service.
- An essential government service, health care service, accommodation, insurance or legal service.
Although the Act includes exceptions for systems that perform narrow procedural functions or merely enhance decision-making, these definitions could be interpreted to apply to a wide range of technologies.
The Governor's letter makes clear that revisions to the Act will refine the definitions to ensure that the Act only governs the highest-risk systems.
As a result, the Act in its final form is likely to apply only to AI systems that genuinely affect decisions that have a material legal or similarly significant impact on designated critical services.
3. Developers have a duty to avoid algorithmic discrimination.
The Act applies to anyone who does business in Colorado and develops, or knowingly and substantially modifies, a high-risk artificial intelligence system, and it requires them to use reasonable care to protect consumers from algorithmic discrimination.
Developers must make documentation available to deployers or other system developers. The documents must show, among other things:
- The system's purpose, intended benefits, and reasonably foreseeable use.
- The type of data used to train the system and the governance measures implemented in the training process.
- System limitations.
- Evaluations performed on the system for algorithmic discrimination.
- Measures taken to reduce risks of algorithmic discrimination.
- How the system should be used, should not be used, and should be monitored.
- Any other information reasonably necessary to help deployers fulfill their obligations under the law.
In its current form, the Act requires developers to immediately notify the Colorado Attorney General and known deployers/developers of any algorithmic discrimination issues. The Governor's letter, however, indicates an intention to move to a more traditional enforcement framework without mandatory proactive disclosures.
4. Deployers also have a duty to avoid algorithmic discrimination.
The Act also requires anyone who does business in Colorado and deploys a high-risk artificial intelligence system to use reasonable care to protect consumers from algorithmic discrimination arising from such systems. Deployers must:
- Implement a risk management policy and program to govern the deployment of high-risk artificial intelligence systems.
- Complete an impact assessment of each high-risk artificial intelligence system.
As passed, the Act would require deployers to proactively notify the Colorado Attorney General of any algorithmic discrimination. The governor's letter, though, indicates that Colorado plans to move to a more traditional enforcement framework without mandatory proactive disclosures.
Additionally, the letter said lawmakers plan to amend the Act to focus regulation on developers of high-risk artificial intelligence systems rather than the smaller companies that deploy them. As a result, we may see scaled-back deployer obligations or broader deployer exemptions in the final regulatory framework.
5. The Act gives consumers rights related to artificial intelligence systems.
Developers and deployers must provide consumers with a public statement summarizing the types of high-risk artificial intelligence systems they develop or use, and how they mitigate the risks of algorithmic discrimination.
Deployers must also notify consumers, before the decision is made, when they use a high-risk artificial intelligence system to make a consequential decision or when such a system is a significant factor in making one. They must also provide the consumer with information about the decision and, where available, the right to opt out.
If a high-risk artificial intelligence system results in a decision adverse to the consumer, the deployer must:
- Disclose to the consumer:
  - The principal reason or reasons for the decision.
  - The extent to which the system contributed to the decision.
  - The types and sources of data the system processed in making the decision.
- Provide an opportunity to correct any data the system processed in making the decision.
- Provide an opportunity to appeal the decision and obtain human review.
Finally, the Act requires that any artificial intelligence system (whether high-risk or not) that is intended to interact with consumers must disclose to them that they are interacting with an artificial intelligence system.
What does this mean for your business?
While the final form of the Colorado Artificial Intelligence Act may differ from the version approved by the state legislature, businesses should begin preparing for AI regulation by:
- Developing an organizational framework for assessing and managing AI-related risks.
- Preparing records and documentation for AI systems that outline how the systems were developed, how they should be used, and the steps taken to mitigate the risks associated with their use.
- Establishing a process to assess the risks and potential impacts of deploying third-party AI systems.
- Enhancing organizational procedures, including third-party contracting and vendor-management procedures, to account for the unique risks of AI.