Deploying AI tools and features without conducting risk assessments used to be like skipping the safety inspection on your car—inadvisable, but not illegal in most states. That’s changing quickly.

While companies debated whether AI risk assessments were worth the effort, regulators are now deciding for them. Ready or not, AI’s era of “wait and see” is over, and the era of “show us the inspection report” has officially begun.

What is an AI risk assessment?

From using chatbots that can adjust their tone based on a user’s emotions to deploying a slate of hyper-realistic virtual influencers, AI is grabbing the attention of companies looking to gain a new competitive advantage.

But the excitement of deploying new AI tools is often matched by the seriousness of the risks they carry.

An AI risk assessment is a process companies use to understand how using AI might impact their customers, employees, clients, and, in many cases, society as a whole. A risk assessment focuses on safety, fairness, privacy, security, and compliance with local regulations.

A review often:

  • Spots red flags proactively: Identifies areas where an AI could go rogue due to discriminatory bias or security flaws.
  • Evaluates potential risks: Weighs the potential impact of each risk, how severe it might be, and how likely it is to happen.
  • Examines data inputs and outputs: Reviews data sources for privacy, security, ethics, quality, and transparency, so you understand exactly what data goes in and what comes out.
  • Checks for decision-making transparency: Verifies that you’re able to explain a system’s decisions and aren’t following its recommendations blindly.
  • Identifies required disclosures: Determines what notifications, privacy notice updates, or separate disclosures are required by law.
  • Assesses vendor relationships: Includes deep-dive evaluations of third-party AI systems and ensures contracts include appropriate protections.
  • Identifies needed controls and safeguards: Determines what bias audits, impact testing, access controls, disclosures, and safety mechanisms may be required based on identified risks.
  • Creates formal documentation: Keeps clear records of assessments, decisions, and controls so the information is always ready for regulatory review. This includes your systems, as well as those you use from third-party vendors.

Just as speed limits and passing rules can vary from state to state, jurisdictions are taking AI oversight into their own hands.

You might think an out-of-state license plate could help you talk your way out of a speeding ticket, but that logic doesn’t fly with regulators. Gartner predicts that by 2026, half of the world’s governments will have enacted laws requiring corporations to use AI responsibly. New laws are cropping up around the globe, and they don’t just ask organizations to comply; in many cases, you’ll have to prove that you do.

When are AI risk assessments going from optional to mandatory?

2026. As you can see, the timeline for AI compliance isn’t casual. What some organizations viewed as a best practice is now on the fast track to becoming mandatory. We watched privacy programs evolve from what many viewed as an in-house vanity project into obligations backed by full-blown legislation, and AI safety regulations are trending the same way.

As of this writing, there are a few significant deadlines barreling down the AI compliance highway:

In the United States:

  • Colorado’s AI Act takes effect on February 1, 2026, introducing mandatory annual impact assessments for high-risk AI.
  • California’s CCPA regulations on automated decision-making technology are moving through the final stages of rulemaking, with the California Privacy Protection Agency having completed public comment periods through June 2025.
  • New York City’s Local Law 144 requires bias audits for automated employment decision tools used by employers and employment agencies within NYC.

Internationally:

  • The EU AI Act has already established four tiers of AI risk (see below) and requires assessments for certain high-risk AI systems, with various compliance deadlines rolling out through 2025-2027.

And more are on the way. What’s clear is this: AI risk assessment is no longer optional, and the clock is ticking.

Understanding AI exposure risks as a business

When your industry deals with high volumes of paperwork, think university admissions offices or health insurers, it can feel tempting to offload much of the data-sorting work to AI. However, these are exactly the functions most vulnerable to algorithmic bias, which makes them a scrutiny magnet for regulators.

Algorithmic bias happens when AI systems develop prejudices—then use those assumptions to make decisions about human users. What appears to be a neutral, data-driven process becomes automated unfairness. It’s not just a technical glitch; it’s a risk to fairness, reputation, and compliance.

Bear in mind, AI risk doesn’t only apply to your front-facing core offering. Your backend systems can also be reviewed. Systems used to vet employees, loan applicants, incoming students, or individuals seeking a professional license are the most audit-prone.

However, many businesses use AI tools that aren’t sensational newsmakers, but still fall under the category of high-risk AI tools.

What is a high-risk AI tool?

If you’re deploying AI tools and features, think of risk assessments as essential maintenance. You can do them upfront to prevent problems, or you can scramble to figure out what went wrong after the fact. (Most businesses prefer the proactive approach.)

Risk assessments become legally mandatory when your AI falls into “high-risk” categories. The EU AI Act provides a helpful framework with four risk tiers:

  • Minimal risk: Spam filters, grammar checkers
  • Limited risk: FAQ chatbots, product recommendation tools
  • High risk: Workplace hiring or termination decisioning, credit scoring algorithms
  • Unacceptable risk: Social scoring by governments, predictive policing

However, in the US, state AI laws vary between jurisdictions, as has generally been the case with privacy law here. Currently, only two states define a “high-risk” AI tool:

  • Colorado: AI tools that make “consequential decisions.”
  • Texas: AI systems involving decisions relating to employment, finance, education, healthcare services, housing, and insurance (from the recently enacted Texas Responsible AI Governance Act).

Doing an AI risk assessment shouldn’t totally hinge on risk level, though. That friendly chatbot that answers FAQ questions about making appointments at your dental clinic? If it starts giving incorrect medical advice, leaking customer data, or responding inappropriately to sensitive topics, you’re looking at liability issues, regulatory scrutiny, and serious damage to your brand, regardless of whether the law requires an assessment.

How to start your first AI risk assessment

If you’re not sure whether your AI plans for your data meet the definition of high risk, don’t worry: you’re not alone.

Most companies recognize the need to do this, but they’re staring at a blank page, wondering where to begin. Here’s the step-by-step approach we use with clients:

Start with your AI inventory

Before you can assess risk, you need to know what you’re working with. If you’ve already completed a privacy data inventory—which documents the tools your company uses and the data being processed—you’re ahead of the game. If you haven’t done one yet, this is your wake-up call to get started.

Begin with the obvious AI tools (chatbots, recommendation engines), then dig into the hidden ones embedded in your vendor software. That HR screening tool? The automated underwriting system? The customer service routing algorithm? They all count.

For each system, document: What does it do? What decisions does it make? What data goes into it? Who has access to it? And finally, what’s the end result or output? Ask teams what they’re actually using to make sure you’re capturing shadow AI: tools or features that are in use but weren’t part of the original use case.

Assess the decision impact

Ask yourself: If this AI system made a wrong decision, would someone be harmed or put at risk of discrimination? Would their privacy rights and freedoms be impacted? These can be hard questions to answer, but take the time and take them seriously: a grammar checker making a mistake is annoying, but an AI system denying someone a loan or rejecting their job application is life-changing.

Use this simple test: Does your AI influence employment, lending, healthcare, housing, education, or government services? Does it impact minors? If yes, you’re looking at high-risk territory. (This isn’t an exhaustive list, but it’s a good place to start.)

Examine your data sources

If you’re using customer service records from 2015 to train an AI that routes support requests, you might be perpetuating outdated assumptions about customer behavior.

AI systems are only as good as their training data. Look at what’s feeding your AI:

  • Is the data current?
  • Is it representative?
  • Is it free from historical bias?
  • Can you even use it?
  • Do you have the right disclosures? 

The disclosure issue can be a sticky one; you may need to update your privacy notice or create additional disclosures. Another point to consider: you may need to review terms and contracts to determine whether the data can be used in the first place. In short: there’s a lot of digging you might need to do.

Test for bias and fairness

Start basic: Run the same scenarios through your AI with different demographic inputs. Does a loan application get different treatment based on zip code? Does your hiring AI score identical resumes differently based on name patterns?

This is just the tip of the iceberg, though. The more consequential your AI’s decisions, the more thorough and in-depth your testing should be.

Document everything

Keep detailed records of your assessment process, findings, and any changes you make. This creates a paper trail demonstrating that you take AI risks seriously and make thoughtful decisions.

As a side note, regulations like the EU AI Act and the Colorado AI Act have specific requirements for how you need to document high-risk AI usage. Working with a privacy consultant can help you better understand what applies to you and your AI use case.

Build monitoring checkpoints

AI systems drift over time as they process new data. Set up regular reviews: quarterly for high-risk systems and annually for lower-risk ones (a toy drift checkpoint is sketched after the list below). You should also run a risk assessment anytime you introduce a new product, service, or vendor. Additionally, conduct assessments when:

  • Regulations change
  • Business expands to new jurisdictions
  • AI models are updated or used differently
  • Security incidents occur, or there is poor user feedback

If you’ve done privacy impact assessments before, you already understand this methodology. We’re applying the same systematic thinking to AI systems: identifying risks, documenting findings, implementing safeguards, and monitoring ongoing performance.

Why start now instead of waiting

Ready to start your AI risk assessment? We’ve helped hundreds of companies develop systematic approaches to privacy risk, and now we’re applying the same methodology to AI systems with comprehensive AI governance support.

Schedule a consultation to learn how to conduct thorough AI risk assessments that protect your business and your customers.

Downloadable Resource

AI Governance Roadmap: Business Guide