Colorado Artificial Intelligence Act

What you need to know about the CAIA:

To What Entities Does the CO AI Act Apply?

The CAIA applies to developers and deployers of “high-risk AI systems.”

Deployer: a person/organization doing business in CO that deploys a high-risk AI system.

  • Think: an insurance company that buys and uses an AI system to determine rates for applicant customers

Developer: a person/organization doing business in CO that develops or intentionally and substantially modifies an AI system. 

  • Think: a company that either builds its own AI system or substantially modifies an available one.

To What Technologies Does the CO AI Act Apply?

A high-risk AI system is any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision: one with a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of, any of the following (a rough triage sketch follows the list):

  • Educational enrollment/opportunity;
  • Employment/employment opportunity;
  • Financial/lending service;
  • Essential government service;
  • Health-care services;
  • Housing;
  • Insurance; or
  • Legal services
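
For teams doing a first-pass inventory, the definition above can be approximated in code. The sketch below is illustrative only, not a legal test: the domain labels mirror the bullet list, and every name in it is our own assumption.

```python
# Hypothetical first-pass triage against the CAIA's "high-risk" definition.
# The domain set mirrors the consequential-decision areas listed above;
# all names and structure here are illustrative, not statutory.
from dataclasses import dataclass

CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending",
    "essential_government_service", "healthcare", "housing",
    "insurance", "legal_services",
}

@dataclass
class AISystemProfile:
    name: str
    decision_domain: str      # which area the system's output affects
    substantial_factor: bool  # does it make, or substantially shape, the decision?

def flag_for_caia_review(profile: AISystemProfile) -> bool:
    """Rough triage flag, not a legal determination."""
    return profile.substantial_factor and profile.decision_domain in CONSEQUENTIAL_DOMAINS

# Example: the insurance-rating system from the deployer example above.
assert flag_for_caia_review(AISystemProfile("rate_engine", "insurance", True))
```
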
What Constitutes AI under the CO AI Act?

“Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

When Does the CO AI Act NOT Apply?

The Colorado AI Act (CAIA) includes several key exemptions to avoid regulatory overlap and support legitimate uses of AI:

  • Small Deployers: Deployers with fewer than 50 full-time-equivalent employees are exempt from certain deployer obligations (including the risk management program and impact assessments) if they use the AI system as intended and do not train it with their own data.
  • Federally Regulated Systems: AI systems approved or overseen by federal agencies—such as the FDA or FAA—are excluded.
  • Healthcare: AI systems used for healthcare recommendations are exempt if they are covered under HIPAA or governed by federal laws that offer equal or greater protections.
  • Federal Law Preemption: Any AI system governed by a federal law that is equal to or stricter than CAIA is not subject to the Act.
  • Permitted Activities: Activities such as legal compliance, law enforcement cooperation, scientific research, security incident response, and pre-deployment AI development are also exempt.
  • Financial Institutions: Entities subject to state or federal financial examination are excluded.

These exemptions are designed to prevent duplicative regulation and to support innovation, research, and lawful operations. 
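
As a purely illustrative companion to the list above, an exemption screen could run before the high-risk triage sketched earlier. The parameter names and ordering below are our own simplification of the bullets, and the permitted-activities exemptions are omitted for brevity.

```python
# Illustrative CAIA exemption screen; simplifies the bullets above and is
# not a substitute for legal analysis. Permitted activities (legal
# compliance, research, security response, etc.) are omitted for brevity.
from typing import Optional

def caia_exemption(
    employee_count: int,
    uses_custom_training_data: bool,
    federally_regulated: bool,          # e.g., FDA- or FAA-overseen systems
    federal_law_equal_or_stricter: bool,
    under_financial_examination: bool,
) -> Optional[str]:
    """Return the first matching exemption, or None if the CAIA may apply."""
    if federally_regulated:
        return "federally regulated system"
    if federal_law_equal_or_stricter:
        return "governed by equal-or-stricter federal law"
    if under_financial_examination:
        return "financial institution under state/federal examination"
    if employee_count < 50 and not uses_custom_training_data:
        return "small-deployer exemption (system used as intended)"
    return None
```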

The CAIA excludes certain technologies from being classified as high-risk AI systems, such as: 

  • Anti-fraud technology not using facial recognition 
  • Anti-malware and anti-virus software 
  • AI-enabled video games  
  • Cybersecurity tools  
  • Databases and data storage  
  • Spam and robocall filtering  
  • Web caching and hosting  
  • Natural language communication tools that provide information, referrals, or recommendations, provided they adhere to acceptable use policies prohibiting discriminatory or harmful content 

Key Components of the Colorado AI Act

What Is Prohibited?
Algorithmic Discrimination:

The CAIA prohibits the use of AI systems that result in unlawful differential treatment or impact based on protected classes such as race, disability, age, gender, religion, veteran status, and genetic information.

Transparency Obligations
Developers:

Must publish a clear, regularly updated statement on their website summarizing the types of high-risk AI systems they have developed or substantially modified, and how they manage known or reasonably foreseeable risks of algorithmic discrimination.

Deployers:

Consumer Notifications: Requirements depend on whether the AI system is high-risk. Deployers that offer any AI system intended to interact with consumers, high-risk or not, must inform consumers that they are interacting with AI, unless this would be obvious to a reasonable person.

High-risk AI systems have additional transparency obligations:

Public Disclosures: Before using a high-risk AI system to make, or be a substantial factor in making, a consequential decision about a consumer, deployers must provide the consumer with a notice stating:

  • A description of the high-risk AI system and its purpose 
  • The nature of the consequential decision being made 
  • The deployer’s contact information 
  • Instructions for accessing the required website disclosure, which covers:
    • Types of high-risk AI systems currently deployed
    • Management of known or reasonably foreseeable risks of algorithmic discrimination 
    • Nature, source, and extent of information collected and used by the AI system
  • Information about the consumer’s right to opt out of personal information processing for profiling 

Post-Adverse-Decision Disclosures: When a high-risk AI system contributes to a consequential decision that is adverse to the consumer, the deployer must provide the consumer with:

  • The principal reason(s) for the decision, including the AI system’s contribution
  • Types and sources of data used
  • Opportunities to correct incorrect personal data and appeal the decision with human review  
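
Teams that generate these notices programmatically might model the two disclosures as simple records. The sketch below is an assumption-laden illustration: the field names are ours, and the required content comes from the bullets above, not from any official schema.

```python
# Illustrative data shapes for the two consumer disclosures described above.
# Field names are our own; map them to counsel-approved notice language.
from dataclasses import dataclass

@dataclass
class PreDecisionNotice:
    system_description: str         # the high-risk AI system and its purpose
    decision_nature: str            # the consequential decision being made
    deployer_contact: str
    website_disclosure_url: str     # link to the required website disclosure
    profiling_opt_out_info: str     # consumer's right to opt out of profiling

@dataclass
class AdverseDecisionDisclosure:
    principal_reasons: list[str]    # including the AI system's contribution
    data_types_and_sources: list[str]
    correction_instructions: str    # how to correct inaccurate personal data
    appeal_process: str             # appeal with human review
```
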
Minimization Obligations

No strict minimization obligations.

Accountability Obligations
Developers:  

Documentation: Provide deployers with comprehensive documentation, including: 

  • Intended uses and known harmful or inappropriate uses of the AI system
  • Summaries of training data types
  • Known limitations and risks of algorithmic discrimination
  • Evaluation methods for performance and mitigation strategies
  • Data governance measures and intended outputs
  • Guidance on appropriate use and monitoring 

Deployers:

Risk Management Program: Implement and maintain a risk management policy and program aligned with recognized standards such as the NIST AI Risk Management Framework or ISO/IEC 42001. This program should be regularly reviewed and updated to identify, document, and mitigate risks of algorithmic discrimination.  
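
As one hypothetical way to operationalize this, a program record could be organized around the NIST AI RMF core functions (Govern, Map, Measure, Manage). The structure and values below are our own sketch, not a requirement of the Act or of either framework.

```python
# Illustrative skeleton of a deployer's risk management program record,
# loosely organized by the NIST AI RMF core functions. All fields and
# values are assumptions for illustration.
risk_management_program = {
    "framework": "NIST AI RMF",  # or ISO/IEC 42001, per the text above
    "govern": {"policy_owner": "AI governance committee",
               "review_cadence": "quarterly"},
    "map": {"systems_inventory": ["rate_engine"],
            "discrimination_risks": ["proxy variables for protected classes"]},
    "measure": {"metrics": ["selection-rate parity", "error rates by group"]},
    "manage": {"mitigations": ["threshold reviews", "human-in-the-loop appeals"]},
    "last_reviewed": "2026-07-01",
}
```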

AI Impact Assessments
Developers:

No impact assessment obligations.

Deployers:

Impact Assessments: Conduct impact assessments within 90 days of the Act's effective date, annually thereafter, and within 90 days of any substantial modifications to high-risk AI systems.

Assessments must include:
  • Purpose and intended use cases
  • Analysis of risks of algorithmic discrimination and mitigation steps
  • Data categories processed and outputs produced
  • Performance metrics and known limitations
  • Transparency measures and post-deployment monitoring.
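
To make the cadence concrete, the date arithmetic can be sketched as below. The effective and modification dates shown are hypothetical placeholders, and the function is our own illustration of the 90-day and annual windows described above.

```python
# Illustrative due-date math for the assessment cadence described above:
# 90 days after the effective date, annually thereafter, and 90 days
# after each substantial modification. Dates are placeholders.
from datetime import date, timedelta

def assessment_due_dates(effective_date: date,
                         modifications: list[date],
                         horizon_years: int = 3) -> list[date]:
    initial = effective_date + timedelta(days=90)            # first assessment
    annual = [initial + timedelta(days=365 * n)              # annual reassessments
              for n in range(1, horizon_years)]
    post_mod = [m + timedelta(days=90) for m in modifications]
    return sorted([initial, *annual, *post_mod])

# Example with a hypothetical effective date and one modification.
print(assessment_due_dates(date(2026, 6, 30), [date(2026, 10, 1)]))
```
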
Reporting Obligations
Developers:

Incident Reporting: Disclose any known or reasonably foreseeable risks of algorithmic discrimination to the Colorado Attorney General and all known deployers or other developers within 90 days of discovery.

Deployers:

Incident Reporting: Notify the Colorado Attorney General within 90 days of discovering that a high-risk AI system has caused, or is reasonably likely to have caused, algorithmic discrimination.

How Will the CAIA Be Enforced?

The Colorado Attorney General holds exclusive enforcement authority under the CAIA. Violations are considered deceptive trade practices under the state’s Consumer Protection Act.  

Note: The AG has the authority to create supporting regulations but is not obligated to do so under the law. 

Data Privacy is Just Good Business