Discussions on AI and data privacy frequently include doom-and-gloom scenarios. Think: expansive regulations, steep fines, and scary Ex Machina futures where sentient technology takes over the world. Not to mention… data privacy. (Or the lack thereof.)
While there are real concerns regarding AI and data privacy, that doesn’t mean your business can’t find ways to use AI to its advantage while staying ethical and improving its relationship with its customers.
So, let’s all take a collective breath. Inhale, exhale.
Today, we’re throwing away the “either/or” mentality and embracing “and.” You can build a strong data privacy program and include AI in your business strategy. You can ensure compliance and build a strong brand for your company.
Here’s a pragmatic look at how to build an AI-use strategy that accounts for data privacy.
Why do data privacy regulators care so much about AI in the first place?
There are many reasons why industries and their respective regulatory agencies see AI as such a jump-scare.
Just consider that…
1. AI technology is not known for its transparency.
One holdup that many businesses have with AI is the “black box” conundrum.
It’s not entirely clear how AI algorithms arrive at their conclusions. The decision-making processes happen in an opaque, black box. This means we can’t see what biases, inconsistencies, or logic drive the output of algorithms or generative AI models.
As a result, businesses that use these models may arrive at false conclusions without the ability to pinpoint the source of failure.
Another transparency issue: who exactly is getting this data? The regulatory waters quickly get muddied when you share information with AI tools, which may involve cross-border data transfers.
2. AI models tend to mimic existing human biases.
Many data privacy laws protect individuals from profiling via AI regarding decisions that seriously affect them. For good reason.
It is well-reported that AI models and algorithms reflect our cultural biases, which makes sense. Models are trained on the internet, and… well, the internet contains multitudes, both good and bad. As a result, AI reflects our very human assumptions, which can lead to visibly skewed outputs.
Generally speaking, this is problematic, but what does it have to do with privacy?
Privacy regulations, starting with the EU’s General Data Protection Regulation and continuing all the way through the U.S.’s growing roster of state laws, offer protections for “special categories of data,” such as race, religion, sexual orientation, etc. Using AI systems that skew information based on special categories of data would lead to compliance run-ins (e.g., racially based algorithms could lead to credit discrimination, a violation of the Equal Credit Opportunity Act).
3. AI models use data provided to them to learn for future queries.
Large AI models don’t just take the information you give them and throw it away. They use it to learn and become better. Which is great for them… but not so much for consumers.
Suppose you share a consumer’s personal data with an AI model without their consent, and that model can then use that data moving forward. In that case, the consumer is prevented from exercising their privacy rights as dictated by data privacy laws.
Stay in the clear by following practical industry best practices.
Data privacy laws are designed to give individuals control over how other entities gather and use their personal data.
To protect your business from these common and often well-founded concerns with AI, your best bet is to follow industry best practices and move forward with a balanced, practical approach. This should include:
- Creating a company AI policy
- Conducting AI risk assessments
- Assessing AI vendors
- Employing risk management practices
Create a company AI policy.
Much like a privacy policy, a company AI policy can serve multiple purposes. It can:
- Demonstrate compliance
- Provide guidance and clarity to employees on how they can or can’t use AI and how the outputs can or can’t be used
- Provide your legal team some peace of mind (you’re welcome)
- Support evaluating and managing vendors and third-party partnerships (AI tools should be reviewed just like all other tools)
What to include in an AI policy.
While you may be thinking, “No, absolutely not, we do not need another company policy,” hear us out.
An AI policy doesn’t have to be extensive. It can be straightforward, to the point, and incredibly helpful.
An AI policy should include the following:
- An overview of any applicable state privacy laws or AI laws, like the Utah Artificial Intelligence Policy Act (“AIPA”) or the EU AI Act
- Information regarding the data privacy training required for staff
- Which use cases are AND are not acceptable for AI algorithms or generative AI
- Where employees can go if they have any questions about using AI for company purposes
- The proper review channels to approve new use cases for AI, such as legal and HR teams
This information provides clear guidelines and transparency for your employees, creating a safe environment where they can use AI to make their jobs easier and your company more profitable, all while safeguarding the consumer trust you’ve worked so hard to earn.
How is your company using AI now?
To build an AI policy that stands the test of both evolving technology and legal regulation (e.g., the brand-new Utah AIPA), you have to understand how your business currently employs AI.
Like a data inventory, an internal AI inventory can document how your company uses AI programs and how those tools interact with sensitive data. Start with questions like the ones below (a brief sketch of what a single inventory entry might look like follows the list):
- What algorithms does your company use?
- What generative AI models do your employees use?
- What data do you share with AI programs?
- What systems do AI algorithms have access to?
- What AI training does your company have in place?
- Do your vendors or third-party partners use AI programs that pull from the data shared with your company?
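To make the inventory idea concrete, here’s a minimal sketch of what one inventory entry might look like if you tracked it in code. The field names and example values are illustrative assumptions on our part, not a required format; a shared spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One illustrative row in an internal AI inventory (field names are assumptions)."""
    tool_name: str                  # the algorithm or generative AI tool in use
    owner_team: str                 # who is accountable for the tool internally
    use_cases: list[str] = field(default_factory=list)    # approved purposes
    data_shared: list[str] = field(default_factory=list)  # categories of data sent to the tool
    touches_personal_data: bool = False                    # does it ever see personal data?
    systems_accessed: list[str] = field(default_factory=list)  # e.g., CRM, helpdesk
    vendor: str = ""                # third party providing the model, if any
    training_required: bool = True  # is privacy training required before use?

# A made-up example entry
support_assistant = AIInventoryEntry(
    tool_name="Support chat assistant",
    owner_team="Customer Success",
    use_cases=["drafting replies to support tickets"],
    data_shared=["ticket text"],
    touches_personal_data=True,
    systems_accessed=["helpdesk platform"],
    vendor="Example AI Vendor",
)
print(support_assistant.tool_name, "- touches personal data:", support_assistant.touches_personal_data)
```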
This information can give you the background necessary to create an effective AI policy that adequately addresses any concerns, including data privacy.
Conduct AI risk assessments before adoption.
The number of available AI algorithms and large language models (aka ChatGPT-like systems) is growing by the day. To protect your company, create a thorough data privacy review system before adopting new models. This can protect your company from liability and save you from some serious headaches in the long run.
A company AI assessment is like any other privacy and security risk assessment:
- Identify applicable jurisdictions
- Engage relevant stakeholders
- Figure out your governance plan
- Create sustainable processes and policies
- Clarify potential risks
- Review on a regular basis
This, of course, is the short version. We have a more detailed conversation about how to approach AI risk assessments here.
Assess AI vendors used by your company.
It’s a best practice to evaluate any new vendor’s privacy practices. We’ve talked a lot about this in previous blogs, but an overview of the process looks like this:
- Identify your vendors
- Establish internal governance
- Rank vendors according to risk
- Vet new vendors
- Rinse and repeat
In theory, AI vendors are vendors like any other… but the thing is, they’re not. AI vendors require additional scrutiny to ensure their practices align with your privacy and artificial intelligence policies. In addition to the steps above, you should ask questions like the ones below (a rough risk-tiering sketch follows the list):
- What kind of data do they use for training and operation?
- How does the AI tool make decisions or predictions?
- What training does the vendor’s staff receive on privacy and security practices?
- What security protocols are in place to prevent data breaches?
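One way to turn those answers into the “rank vendors according to risk” step is a rough scoring pass like the sketch below. The factors, weights, and tier cutoffs are assumptions for illustration only; your legal and privacy teams would set the real criteria.

```python
def vendor_risk_tier(uses_personal_data: bool,
                     trains_on_customer_data: bool,
                     cross_border_transfers: bool,
                     independent_security_audit: bool) -> str:
    """Rough, illustrative risk tiering based on vendor vetting answers (weights are made up)."""
    score = 0
    if uses_personal_data:
        score += 2
    if trains_on_customer_data:   # the model learning from your customers' data is a major flag
        score += 3
    if cross_border_transfers:
        score += 1
    if independent_security_audit:
        score -= 1
    if score >= 4:
        return "high"    # route to legal/privacy review before adoption
    if score >= 2:
        return "medium"  # proceed with contractual safeguards
    return "low"

# Example: a vendor that trains on customer data but has been independently audited
print(vendor_risk_tier(True, True, False, True))  # -> "high" under these made-up weights
```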
Does your current process support vetting AI vendors? A privacy consultant can help you evaluate how your processes can be adapted so you can thoroughly review AI tools.
Employ practical risk management techniques when using AI technology.
Even if you determine that an AI tool poses minimal risk, these best practices can help minimize your exposure and protect sensitive data.
1. Have a trial period for new AI tools.
Before adopting a new AI program or model, incorporate some ethical testing.
If it’s an image generator, test if certain job queries lead to racial or gender-based biases. If it’s a language model like ChatGPT, fact-check any information. If it’s a recruitment algorithm, compare its results to your team’s prior results.
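For the recruitment example, a trial period might include a quick parity check like the sketch below, which compares selection rates across groups in your test data. The group labels, data, and 0.8 threshold (loosely inspired by the common “four-fifths” rule of thumb) are assumptions; this is a smoke test, not a full fairness audit or legal advice.

```python
from collections import defaultdict

def selection_rates(results):
    """results: list of (group_label, was_selected) pairs from a trial run of the tool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in results:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` of the best-performing group."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Made-up trial data: (group, selected?)
trial = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
rates = selection_rates(trial)
print(rates)                  # {'A': 0.67, 'B': 0.33} (approximately)
print(flag_disparity(rates))  # ['B'] -> worth a closer look before rollout
```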
2. Minimize the use of personal data in any AI algorithms.
The less personal data you give AI platforms, the lower your risk. In general, you can improve your data privacy program through the practices below (a small redaction sketch follows the list):
- Data minimization (aka holding onto only what you need in the first place)
- Collecting the proper consent for any collected data
- Granting data access to as few people and applications as possible
- Ensuring the data is processed according to the AI policy
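To make the first two bullets concrete, here’s a minimal sketch of stripping obvious identifiers out of a prompt before it ever reaches an AI tool. The regular expressions are simplistic assumptions and will miss plenty; in practice you’d pair this with consent checks and a vetted redaction library or service.

```python
import re

# Illustrative patterns only; real personal-data detection needs a vetted tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sending text to an AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com and call back at 555-867-5309."
print(redact(prompt))
# -> "Summarize this ticket from [EMAIL REDACTED] and call back at [PHONE REDACTED]."
```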
3. When in doubt, consult with an expert third party.
It’s better to be safe than sorry. If you’re considering potential AI applications but want to protect your company from liability (and might we say: great choice!), a second opinion from an industry expert can provide the perspective you need to move forward in your company’s best interest.
Have questions about the intersection of AI and data privacy? Schedule a consultation today.