Colorado Artificial Intelligence Act

What you need to know about the CAIA:

To Whom Does the CAIA Apply?

The CAIA applies to developers and deployers of “high-risk AI systems.”

Developer v. Deployer:

  • Deployer: a person/organization doing business in CO that deploys a high-risk AI system.
    • Think: an insurance company that buys and uses an AI system to determine rates for applicant customers.
  • Developer: a person/organization doing business in CO that develops or intentionally and substantially modifies an AI system.
    • Think: a company that either builds its own AI system or significantly modifies an available one.

What is Considered High Risk?

A high-risk AI system is any AI system that, when deployed, makes, or is a substantial factor in making, a decision that has a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of, any of the following:

  • Educational enrollment/opportunity;
  • Employment/employment opportunity;
  • Financial/lending service;
  • Essential government service;
  • Health-care services;
  • Housing;
  • Insurance; or
  • Legal services.

It does not include AI systems intended to perform narrow procedural tasks or to detect decision-making patterns, nor certain technologies related to security, data storage, website functionality, and communication with consumers (the law contains a specific list), so long as they do not make, or substantially influence, consequential decisions.

What is Algorithmic Discrimination?

Algorithmic discrimination is any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.

It does not include the offer, license, or use of a high-risk artificial intelligence system by a developer or deployer if it is solely for the purpose of:

  • Self-testing to identify, mitigate, or prevent discrimination, or to ensure compliance with state and federal law.
  • Expanding an applicant, customer, or participant pool to increase diversity or address historical discrimination.
  • An act or omission by or on behalf of a private club or other establishment that is not open to the public, as outlined in Title II of the federal “Civil Rights Act of 1964.”

Key Components of the Colorado Artificial Intelligence Act

Developer Obligations

Duty to Avoid Algorithmic Discrimination

Developers of high-risk AI systems have a duty to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of their systems. Developers are granted a rebuttable presumption that they used reasonable care if they comply with the law's requirements.

Transparency Obligations

Developers must make available to deployers or other developers of high-risk AI systems:

  • A description outlining the reasonably foreseeable uses and known harmful/inappropriate uses of the high-risk AI system;
  • Documentation providing a high-level summary of the type of data used to train the high-risk AI system, as well as its known or reasonably foreseeable limitations, including the risk of algorithmic discrimination arising from its intended use;
  • Documentation describing the purpose of the high-risk AI system;
  • Documentation describing the intended benefits and uses of the high-risk AI system;
  • Any additional documentation reasonably required to help the deployer understand the outputs and monitor the system’s performance for risks of algorithmic discrimination;
  • A description of how the high-risk AI system was evaluated for performance and mitigation of algorithmic discrimination before it was made available to the deployer;
  • A description of the data governance procedures covering the training datasets, and the measures used to examine the suitability of data sources, possible biases, and any appropriate mitigation thereof; and
  • Documentation and the information needed for a deployer to complete an impact assessment.

Web Notification

The developer must make available, on its website or in a public use-case inventory, a statement summarizing the types of high-risk AI systems it has developed or intentionally and substantially modified and currently makes available to a deployer or another developer, and how it manages the known or reasonably foreseeable risks of algorithmic discrimination that could arise from them. This statement must be updated within 90 days after the intentional and substantial modification of any high-risk AI system.

Notification of Algorithmic Discrimination

Developers must notify the Colorado Attorney General (AG) and all known deployers or other developers of the system within 90 days of discovering that a deployed high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination.

Deployer Duty of Care

Deployers have a duty to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Deployers will be presumed to have used reasonable care if they comply with the law.

Deployer Transparency Obligations

Deployers must:

  • Establish a risk management policy and program to oversee the deployment of high-risk artificial intelligence systems;
  • Complete an impact assessment (or engage a third party to do so) annually and within 90 days after any intentional and substantial modification to the high-risk AI system;
  • Provide notice on its website summarizing information such as the types of high-risk AI systems it currently deploys, how it manages known or foreseeable risks of algorithmic discrimination, and the nature of the information collected and how it is used;
  • Where deployers use a high-risk AI system to make/significantly influence a consequential decision about a consumer, they must:
    • Provide a statement covering the purpose of the system, the nature of the decision, and, if relevant, how a consumer can utilize their right to opt out of profiling under the Colorado Privacy Act; and
    • Offer the ability to appeal: if the system makes an adverse consequential decision about a consumer, the relevant information regarding that decision must be made available along with an opportunity to appeal. The appeal process must, where technically feasible, allow for human review.
  • Notify the AG within 90 days if it discovers that a high-risk AI system has caused algorithmic discrimination; and
  • Notify consumers that they are interacting with an AI system (unless it would be obvious to a reasonable person) — this applies to deployers or other developers that deploy or make available a consumer-targeted AI system.

Impact Assessments by Deployers

Deployers must conduct impact assessments annually and within 90 days of each intentional and substantial modification of the high-risk AI system. They must retain records of these assessments for at least three years following the final deployment of the system. These assessments must include:

  • A statement disclosing the purpose, intended use cases, deployment context of, and benefits afforded by the high-risk AI system;
  • Analysis of whether the system poses known or reasonably foreseeable risks of algorithmic discrimination; and if so, the nature of the discrimination and the steps already taken to mitigate the risks;
  • Description of the categories of data the system processes as inputs and the outputs produced;
  • The metrics used to evaluate the system's performance and its known limitations;
  • If data was used to customize the system, an overview of the categories of data used;
  • A description of the transparency measures used; and
  • A description of the post-deployment monitoring and user safeguards, including how the deployer addresses issues arising from deployment.

When Does the CAIA NOT Apply?

The CAIA offers a large number of highly specific exemptions from its scope, although notably none are entity-level. Instead, the exemptions largely turn on whether the use of the high-risk AI system is subject to other regulatory oversight or to a law or regulation similar to the CAIA. Certain small-business deployers are exempt from the requirements for risk management, impact assessments, and website disclosures.

Other notable exemptions cover high-risk AI systems that have been approved, authorized, certified, or cleared by certain federal agencies. Compliance with standards established by federal agencies is also exempt, so long as the federal standard is at least as stringent as the CAIA. There are also exemptions for use in product recalls and in certain research scenarios.

How Will the CAIA Be Enforced?

The law is enforceable solely by the Colorado AG; there is no private right of action. The AG has the authority to require developers and deployers to provide specific information related to their documentation.

Covered entities have an affirmative defense where they discover and cure the violation and are otherwise in compliance with the NIST Artificial Intelligence Risk Management Framework, another nationally or internationally recognized AI risk management framework, or a framework designated by the AG.

Note: The AG has the authority to create supporting regulations but is not obligated to do so under the law.