You probably got to do some fun experiments in high school chemistry class. Maybe you made fiery whoosh bottles to learn about combustion reactions or crafted a Diet-Coke-and-Mentos volcano.

But science class isn’t just an exercise in explosions. You also learn the safety precautions necessary for a productive experiment, like wearing goggles and gloves.

It’s a good lesson: you can do cool things as long as you prioritize safety. 

Using AI in your business is like recreating that high school chemistry class. With so many use cases, you can do incredible things to propel your business forward. But first, it’s critical to understand the precautions to protect your business, employees, and customers.

So, how can your business leverage AI without setting off fire alarms? 

Understanding AI from a privacy perspective

When we talk about AI regulations today, we’re typically referring to machine learning (ML). ML is AI that enables a system to learn and improve based on data and feedback: think generative AI tools like ChatGPT.

The more data you give the system, the better the system becomes.

However, data privacy laws dictate that consumers have the right to control how their information is collected, accessed, stored, used, shared, and managed. 

The current regulatory landscape

The U.S. doesn’t have a modern, overarching data privacy or AI regulation. Instead, individual states have passed their own AI and data privacy laws, which can make the landscape tricky for businesses to navigate.

So far, Utah and Colorado have passed Artificial Intelligence Acts that expand state consumer protection regulations, including data privacy. Meanwhile, existing privacy laws like the California Consumer Privacy Act (CCPA) have expanded their definitions of personal information to cover AI.

The new EU AI Act

The EU’s AI Act was approved in May 2024 and will come into effect in phases over the next few years.

Under the EU AI Act, businesses that gather data from EU residents will face added compliance measures beyond those imposed by the General Data Protection Regulation (GDPR), including rules that vary with the risk level of their AI activities.

For example, businesses behind certain products, like children’s toys and medical devices, will be required to complete privacy impact assessments (PIAs) before those products go to market.

Remember that if your business is found to violate consumer privacy laws, whether under the EU AI Act, the GDPR, or U.S. state regulations, you could face serious financial penalties.

Five practical steps for managing AI privacy risks

Let’s discuss practical measures you can take to fully utilize AI while managing and mitigating data privacy risks. (You can have your Bunsen burner and avoid combustible accidents, too.) 

1. Data governance and AI

Create a cross-departmental steering committee to develop clear policies on AI use cases and ensure strong data governance.

Potential topics and issues to evaluate include:

Third-party access and usage:

  • Which third parties have access to the data, and what level of access do they have?
  • Will third parties be involved in training the AI model? If so, how will their access be limited?

AI use and sensitive data:

  • How will the AI be used within the business, and will it involve sensitive data?
  • How do you ensure data quality and integrity for accurate and reliable AI outcomes?

Operational and risk management:

  • How can AI-related risks be mitigated and negative impacts on the business prevented?
  • How will access control policies be implemented to ensure proper data handling? (See the sketch after this list.)
  • What are the operational requirements to enable efficient and sustainable AI use?
  • What steps are necessary to meet compliance obligations and avoid legal and ethical issues?

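To make the access-control question concrete, here’s a minimal sketch of a policy check that filters records before they go to a third party. Everything in it, the role names, the data categories, and the policy itself, is hypothetical; your steering committee would define the real rules.

```python
# Hypothetical access policy: which data categories each vendor role may receive.
ACCESS_POLICY = {
    "analytics_vendor": {"usage_metrics"},
    "model_training_vendor": {"usage_metrics", "support_tickets"},
}

# Categories never shared with third parties, regardless of role.
SENSITIVE_CATEGORIES = {"health", "financial", "government_id"}

def records_for_vendor(records, vendor_role):
    """Return only the records a given vendor role is allowed to receive."""
    allowed = ACCESS_POLICY.get(vendor_role, set())
    return [
        r for r in records
        if r["category"] in allowed and r["category"] not in SENSITIVE_CATEGORIES
    ]

records = [
    {"id": 1, "category": "support_tickets"},
    {"id": 2, "category": "health"},
]
# The training vendor gets the support ticket but never the health record.
print(records_for_vendor(records, "model_training_vendor"))
```

The point isn’t the code itself: it’s that each question in the list above can become an explicit, auditable rule rather than a handshake agreement.
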
It’s also important to identify a data governance framework. The NIST Privacy Framework or the University of Toronto’s VALID-AI questions are great places to start. 

2. Privacy impact assessments for AI

Many U.S. state laws require PIAs when companies change processes, products, or services. But while they are (sometimes) a regulatory requirement, they’re always a good step to help companies manage privacy risks—and they’re especially great for businesses looking to manage AI well. 

(In fact, incorporating AI tools may be enough to trigger a PIA obligation. Not sure? Ask your friendly privacy expert!)

PIAs provide more than just compliance; they let businesses dig into deeper concerns like bias, ethics, discrimination, and the data inferences AI models often make. These assessments help ensure you’re using AI tools within legal frameworks and in line with customer expectations, keeping you on the right side of privacy and consumer trust.

Key areas to evaluate during a PIA include: 

Bias, ethics, and discrimination

  • How does the AI system deal with bias and prevent discriminatory outcomes?
  • What inferences does the AI make? Are they ethical or fair?

Privacy risks and compliance

  • What privacy risks are involved? Does the system comply with existing and new laws, and does it line up with consumer expectations?
  • Does the AI comply with emerging AI-specific laws, such as the EU AI Act?

Oversight and transparency

  • How will your business maintain oversight while preventing faulty outputs and mitigating risks?
  • How transparent is the AI model in its decision-making? How does it address consumer concerns about model explainability?

Even if you’ve conducted PIAs in the past, review your processes and see what might need to be updated to integrate AI into your assessment process. While you may be able to wrap PIAs into established procedures, you may want to conduct AI-specific ones. 
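
If you’re formalizing that review, one lightweight approach is to encode your PIA triggers as an intake screen that every new project or tool passes through. The trigger questions below are illustrative, a sketch of the idea rather than legal criteria; your privacy team would define the real list.

```python
# Hypothetical PIA intake screen: flag projects that need an assessment.
PIA_TRIGGERS = {
    "uses_ai": "Incorporates an AI or ML component",
    "processes_personal_data": "Processes personal data",
    "automated_decisions": "Makes automated decisions affecting individuals",
    "new_third_party": "Shares data with a new third party",
}

def needs_pia(project_answers: dict) -> list:
    """Return the triggered reasons; a non-empty list means 'run a PIA'."""
    return [desc for key, desc in PIA_TRIGGERS.items() if project_answers.get(key)]

answers = {"uses_ai": True, "processes_personal_data": True}
triggered = needs_pia(answers)
if triggered:
    print("PIA required:", "; ".join(triggered))
```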

3. Train your employees on transparency and explainability

Whatever policies and procedures you create, ensure your employees understand their parameters and why they exist.

It’s not enough to toss new AI policies into the grab bag of your annual training. Create ongoing training programs for employees and stakeholders to increase awareness of AI, how it intersects with privacy, and (most importantly for them) how it impacts their jobs.

Training can feel like a lot of work, but it has many benefits. Clear policies and consistent training can encourage employees to share new AI use case ideas or elevate issues. 

Remember too that privacy notices should include provisions about how AI is used. If appropriate, consider creating a separate AI disclosure. 

Transparent data privacy and AI policies are also huge opportunities to build trust with your consumers, which increases your bottom line and gives you a competitive edge in the market. Don’t ruin it with impossible-to-understand policies and procedures. Keep it simple.

4. Consider data rights challenges posed by AI     

If you ever made slime during a classroom (or, let’s be honest, home) chemistry experiment, you know that once you mix some ingredients together, they’re hard to separate.

So it goes with AI and data. And that can create issues when it comes to upholding privacy rights once data is embedded in a model.

Businesses need to think about how they can uphold data subject rights when information is part of an AI system.

Key questions you should be asking about how to manage data subject rights in AI:

  • Are we transparent about AI data processing in our privacy notices, and do we provide explanations for automated decisions?
  • Do we have systems in place to track and isolate personal data in AI models, and are we using privacy-enhancing technologies? (See the sketch after this list.)
  • How will we handle requests for access, portability, rectification, erasure, and processing restrictions?
  • Do our AI systems allow for human intervention, and can individuals contest or understand decisions made by AI?
  • Are we minimizing data collection and ensuring it is used only for specified purposes?
  • Are our AI vendors compliant with data subject rights, and do our contracts obligate them to assist with rights requests?
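
What does “track and isolate personal data in AI models” look like in practice? Here’s a minimal sketch of a training-data ledger that maps people to records and records to model versions, so an erasure request tells you exactly what to delete and which models are affected. The class and names are hypothetical; a real system would need durable storage, audit logging, and a retraining or machine-unlearning strategy.

```python
from collections import defaultdict

class TrainingDataLedger:
    """Hypothetical ledger linking data subjects -> records -> model versions."""

    def __init__(self):
        self.records_by_subject = defaultdict(set)  # subject_id -> record ids
        self.models_by_record = defaultdict(set)    # record_id -> model versions

    def log_training_run(self, model_version, records):
        """Record that a model version was trained on (record_id, subject_id) pairs."""
        for record_id, subject_id in records:
            self.records_by_subject[subject_id].add(record_id)
            self.models_by_record[record_id].add(model_version)

    def erasure_impact(self, subject_id):
        """For an erasure request: records to delete, plus model versions
        that may need retraining or unlearning."""
        records = self.records_by_subject.get(subject_id, set())
        models = set()
        for record_id in records:
            models |= self.models_by_record[record_id]
        return records, models

ledger = TrainingDataLedger()
ledger.log_training_run("v1", [("rec-1", "user-42"), ("rec-2", "user-7")])
print(ledger.erasure_impact("user-42"))  # ({'rec-1'}, {'v1'})
```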

5. Third-party AI vendor management

Like most businesses in today’s landscape, you likely use a range of vendors to support your operations. Your privacy program’s third-party risk management strategy should already account for these vendors. 

Your AI tools should, too (no surprise). 

And another unsurprising thing: your AI vendors bring extra considerations, like: 

  • Where does the vendor get their AI model?
  • What kind of data is used to train the AI?
  • How often do they refresh their data?
  • Is any personal data involved in training, and if so, is it anonymized or de-identified? (See the pseudonymization sketch after this list.)
  • What steps do they take to stop personal data from being re-identified?
  • How do they handle issues like bias, inaccuracies, or underrepresentation in the AI’s outputs?
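
On the anonymization question flagged above, it helps to know what “de-identified” can mean in practice. Here’s a minimal sketch of pseudonymizing a direct identifier with a keyed hash before data leaves your systems. Note the hedge: keyed hashing is pseudonymization, not true anonymization, because anyone holding the secret key could re-link the data. The key stays with you, never the vendor.

```python
import hashlib
import hmac

# Hypothetical secret; in practice, store it in a secrets manager and rotate it.
SECRET_KEY = b"example-key-do-not-hardcode"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed, opaque token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "ticket": "My order arrived late."}
safe_record = {"user": pseudonymize(record["email"]), "ticket": record["ticket"]}
print(safe_record)  # the email never leaves your systems; an opaque token does
```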

Third-party risk mitigation, like monitoring the ongoing compliance of your AI vendors, can prevent many headaches down the road.


Stay on top of your AI program

AI privacy management isn’t a set-it-and-forget-it program. But with a clear, practical governance plan in place, you can minimize the burden of program management and maximize your advantages. 

To manage your AI program with confidence:

  • Identify a clear owner or decision-maker for AI usage at your organization to ensure accountability. 
  • Collaborate across IT, legal, compliance, marketing, and HR departments to create a program that works for everyone.
  • Leverage work that has already been done within your privacy program. Take advantage of existing processes like PIAs, data inventories, and third-party risk management to get AI management up to speed quickly. (And avoid unnecessary work!)   
  • Stay informed about evolving AI regulations and industry best practices on social media, podcasts, or newsletters (like the one you can sign up for below). 

Third-party experts can help create practical AI programs that boost your business and protect your customers at the same time. To learn more, schedule a consultation today.