Imagine you need money, and you’re trying to figure out where to get it. 

Well, banks have money. And a lot of it is just sitting there gathering dust (and maybe some interest).

Now, in this hypothetical, you don’t know that taking money that isn’t yours is against the law. You just think you can take the money, and it will be fine. 

But when you get caught in the middle of your robbery, you try to argue in your defense, “I didn’t know it was illegal!” 

You would be laughed all the way to jail.

Ignorance of the law sounds ridiculous when we talk about stealing from a bank. Just because you don’t know it’s against the rules doesn’t mean you’re exempt from the consequences of your actions.

However, with the use of AI, many of the “rules,” from laws to regulatory frameworks, are very new or still in development. Even though these regulations are new, businesses are still responsible for understanding which requirements apply to them. 

So, let’s examine four up-and-coming regulatory approaches to AI and what you need to know to protect your business.

The European Union’s AI Act can apply to American businesses

The EU’s AI Act was recently passed into law, but it’s been in the works since 2021. As with the EU’s General Data Protection Regulation (GDPR), US-based companies may still be subject to the EU’s new AI Act if they market to EU residents or provide AI-based products and services to them. 

And the penalties are steep: fines can be as high as €35 million or 7% of global revenue, whichever is higher.

What you need to know about the EU’s AI Act

The AI Act was developed to establish the world’s first comprehensive legal framework for AI, with a focus on protecting individuals’ rights, safety, and the ethical use of AI systems. 

The EU’s AI Act takes a risk-based approach, sorting AI systems into four tiers based on the type of data the AI uses and its use case (a short classification sketch follows the list below).

The four tiers are:

  • Minimal risk: AI systems that don’t pose risks to an individual’s safety or health and can be broadly used across sectors with limited compliance obligations. While these applications are free from stringent regulatory oversight, developers are still encouraged to adhere to best practices and voluntary standards.
    • Examples: Spam filters or AI-driven video games
  • Limited risk: AI usage under this category may pose some risks, particularly if there isn’t transparency with the end user. Companies using AI that is considered “limited risk” must inform users that they’re interacting with an AI system so they can make informed decisions about whether to continue.  
    • Examples: AI applications like chatbots or AI-generated content where potential risk to safety or rights is lower (but still present). 
  • High risk: AI usage under this category is considered to pose a significant risk to health, safety, or fundamental rights. If businesses use AI that is considered “high risk,” they must meet strict compliance requirements, including thorough testing and certification, data governance, and user transparency. 
    • Examples: This category encompasses a wide range of applications, including employee evaluation systems, recruitment technology, biometric identification systems, and educational evaluation systems. 
  • Unacceptable risk: AI practices ranked as an unacceptable risk are strictly prohibited, including deceptive or manipulative business practices or using biometric data to categorize individuals based on factors such as their race, sexual orientation, or political affiliation. 
    • Examples: Social scoring systems that could lead to discrimination or real-time biometric identification systems used in public spaces without appropriate safeguards. 
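
To make the tiers more concrete, here’s a minimal sketch of how a team might keep an internal inventory that tags each AI use case with an indicative tier. The use-case names and tier assignments are illustrative assumptions based on the examples above, not a legal determination under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical internal inventory: each AI use case gets an indicative tier
# based on the examples in the EU AI Act's four risk categories. A real
# classification requires legal review against the Act itself.
AI_USE_CASE_INVENTORY = {
    "email_spam_filter": RiskTier.MINIMAL,
    "customer_support_chatbot": RiskTier.LIMITED,
    "resume_screening_tool": RiskTier.HIGH,          # recruitment technology
    "employee_performance_scoring": RiskTier.HIGH,   # employee evaluation
    "social_scoring_system": RiskTier.UNACCEPTABLE,  # prohibited practice
}

def flag_for_review(inventory: dict) -> list:
    """Return use cases that need compliance work before deployment."""
    return [
        name for name, tier in inventory.items()
        if tier in (RiskTier.HIGH, RiskTier.UNACCEPTABLE)
    ]

if __name__ == "__main__":
    for use_case in flag_for_review(AI_USE_CASE_INVENTORY):
        print(f"Needs compliance review: {use_case}")
```

Even a simple inventory like this supports the documentation habit discussed next: it shows which systems you’ve assessed and which ones you’ve flagged for deeper review.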

For businesses in danger of falling under high-risk restrictions, keep in mind that this categorization can result from a lack of documentation or assessment of AI activities (and an abundance of caution on the EU’s part). 

This means that the more you clarify, document, and assess your AI uses, the likelier you are to assuage regulators’ concerns and avoid being slapped with a high-risk label. Businesses whose AI systems do fall into the high-risk category must:

  • Register with the EU 
  • Have a quality management system
  • Maintain adequate documentation 
  • Undergo certain assessments
  • Comply with any restrictions 
  • Produce documentation of regulatory compliance upon request

Other points worth noting:

  • If an AI system is intended to directly interact with individuals, it must be clearly marked as an AI tool.
  • AI models with “high-impact capabilities” may be subject to additional restrictions.

When does the EU AI Act come into effect?

The AI Act is expected to enter into force between May and July 2024. 

The different provisions within the act will take effect at certain milestones after it enters into force. For example, the banning of AI practices labeled as unacceptable risk will take effect six months after the AI Act enters into force.

The Utah Artificial Intelligence Policy Act: the first of its kind in the US (kind of)

Utah enacted the Utah Artificial Intelligence Policy Act (UAIP) in March 2024. The law went into effect on May 1, 2024, in a relatively quick turnaround. 

Under the UAIP, businesses using generative AI must “clearly and conspicuously disclose” when a consumer is interacting with AI, not a human. These interactions can take place via text, audio, or visual communication. 

However, these disclosure requirements only apply if a consumer asks or prompts the generative AI to disclose whether they’re interacting with a human. If they don’t ask, businesses don’t have to say anything. 
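
As a concrete illustration, here’s a minimal sketch of how a customer-facing chatbot might satisfy that ask-and-disclose obligation. The trigger phrases, function names, and disclosure wording are assumptions for illustration, not language from the statute.

```python
# Hypothetical handler: disclose AI involvement when a consumer asks.
# Trigger phrases and disclosure wording are illustrative assumptions,
# not statutory language from the UAIP.
DISCLOSURE_TRIGGERS = (
    "are you human",
    "are you a real person",
    "am i talking to a bot",
    "are you ai",
)

AI_DISCLOSURE = "You are interacting with generative AI, not a human."

def respond(user_message: str, generate_reply) -> str:
    """Prepend an AI disclosure when the consumer asks whether they're
    talking to a human; otherwise reply normally."""
    reply = generate_reply(user_message)
    if any(trigger in user_message.lower() for trigger in DISCLOSURE_TRIGGERS):
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

# Example usage with a stand-in reply generator:
if __name__ == "__main__":
    print(respond("Are you a real person?", lambda msg: "Happy to help!"))
```

A business in a regulated occupation, as described next, would instead surface the disclosure up front, before the interaction begins, rather than waiting to be asked.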

On the other hand, individuals or businesses working in “regulated occupations” must disclose that a consumer is interacting with generative AI or materials created by generative AI. These disclosures should be: 

  1. Provided prior to AI interaction
  2. Prominently displayed
  3. Provided verbally during oral exchanges and in writing for written communications

(Note that the “regulated occupations” category is broad, encompassing everything from accountants to veterinary medicine.)

But don’t think that Utah is shifting from its business-friendly posture; as part of the legislation, the state has established the Office of Artificial Intelligence Policy and a learning laboratory to study both the risks and benefits of AI and recommend potential regulatory frameworks. The goal: AI innovation done safely. 

Businesses and AI under the UAIP: “It’s not you, it’s me”

Utah lawmakers have been clear that they don’t want the law to discourage innovation, and as such, the penalties for violations are softer than those of the EU AI Act. 

To start, the UAIP doesn’t allow for a private right of action, meaning individuals can’t file claims against businesses under the law’s provisions. However, the Utah Division of Consumer Protection may assess an administrative fine of up to $2,500 per violation, and the Utah Attorney General may seek up to $5,000 per violation. 

Interestingly, a company that violates the UAIP can’t blame the violation on the generative AI tool itself. If the generative AI tool made a statement or completed an act in violation of the state law, it’s still the business’s responsibility. 

Utah and other state AI laws

While Utah is the first U.S. state to pass an AI-specific law, the UAIP is far from the only state regulation to address AI. The Colorado Privacy Act, which took effect in 2023, for example, includes provisions that allow consumers to opt out of certain automated decisions and requires assessments of high-risk data processing activities. 

And more are coming down the pike: California (unsurprisingly) is working hard to pass an AI bill. Other states, like Connecticut, Vermont, Hawaii, Illinois, New York, and Rhode Island, have recently introduced legislation. 

The U.S. Federal Trade Commission is focused on AI fraud

Deepfakes range from the laughable (Back to the Future/Spider-Man mashups) to the deeply worrying (identity theft, misinformation, cybersecurity threats, and more), and they’re on the rise. 

The number of deepfake videos, for example, increased by 550% between 2019 and 2023, according to a study by Home Security Heroes, a research group focusing on online security.

The proliferation of deepfakes and other forms of AI impersonation and fraud is prompting the Federal Trade Commission (FTC) to take action. 

In February 2024, the FTC proposed new protections to help combat AI impersonation. This action supplements the FTC’s newly finalized Government and Business Impersonation Rule, designed to combat scammers impersonating businesses and government agencies. 

This move should help the FTC deter fraud and secure redress for businesses, government agencies, and consumers alike. 

Businesses can use the NIST AI RMF to help navigate AI risk management

The National Institute of Standards and Technology (NIST) is part of the U.S. Department of Commerce. Its AI Risk Management Framework (AI RMF) is a voluntary framework that helps build trust and transparency in the design, development, and use of AI-related products, services, and systems. 

The AI RMF is meant to be adaptable to businesses of all sizes and sectors. It provides a roadmap for identifying risk in the context of AI and offers a set of processes and activities that companies can use to assess and manage risks related to AI systems. 

The processes fall under four core functions:

  • Govern
  • Map
  • Measure
  • Manage

For businesses looking for a guide to AI risk assessment and management, NIST has also released a “playbook” that provides information and guidance on addressing the functions above. 
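
To see how the four functions might translate into day-to-day work, here’s a minimal sketch of an internal tracker organized around them. The activities, owners, and field names are hypothetical examples, not prescriptions from the AI RMF or the playbook.

```python
from dataclasses import dataclass

@dataclass
class RmfActivity:
    """One risk-management activity, tagged with its AI RMF core function."""
    function: str      # "govern", "map", "measure", or "manage"
    description: str
    owner: str
    complete: bool = False

# Hypothetical activities illustrating each core function.
activities = [
    RmfActivity("govern", "Adopt an AI acceptable-use policy", "Legal"),
    RmfActivity("map", "Inventory AI systems and the data they process", "IT"),
    RmfActivity("measure", "Test the hiring model for biased outcomes", "Data team"),
    RmfActivity("manage", "Define a rollback plan for model failures", "Engineering"),
]

def open_items_by_function(items):
    """Group incomplete activities by core function for a status report."""
    report = {}
    for item in items:
        if not item.complete:
            report.setdefault(item.function, []).append(item.description)
    return report

if __name__ == "__main__":
    for function, descriptions in open_items_by_function(activities).items():
        print(f"{function}: {descriptions}")
```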

While not required, the AI RMF is an important tool for helping businesses navigate the risks associated with using AI tools. Following the framework would also assist businesses that have to abide by the EU’s AI Act or the UAIP.

How to blend AI RMF with privacy programs

Frameworks are great, but how can you put the AI RMF into actual practice at your business? There are a few (familiar) steps to take: 

  • Understand and map AI risks: Identify and document the risks associated with AI systems, including privacy concerns related to data collection and processing. What are the legal and regulatory requirements, and how do AI risks relate to them? 
  • Develop and implement policies: Establish sustainable policies, processes, and procedures to address privacy-related AI risks. 
  • Measure and manage risks: Develop business processes that measure AI risks, like bias, false positives, and unintended uses of AI systems, and establish controls that will mitigate them (see the measurement sketch after this list). 
  • Governance and accountability: Who will be in charge of monitoring and managing AI risks? How often will risks be reviewed? What decision-making processes need to be put into place?  
  • Documentation and compliance: Document AI risk management activities, making sure they align with compliance with existing and emerging regulations.  
  • Training and awareness: Build ongoing training programs for AI teams and other stakeholders to increase awareness of AI risks and the importance of privacy protection. 
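
For the “measure and manage” step, one simple starting point is computing a metric such as the false positive rate per group and comparing the gap against a threshold you set. The example records, group labels, and threshold below are illustrative assumptions, not a benchmark required by any of the regulations discussed above.

```python
# Minimal sketch: compare false positive rates across groups to spot
# possible bias in an AI system's decisions. Records and the review
# threshold are illustrative assumptions.
from collections import defaultdict

# Each record: (group, model_prediction, actual_outcome); 1 = positive.
decisions = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

def false_positive_rates(records):
    """False positive rate per group: wrongly flagged / all true negatives."""
    false_pos = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted, actual in records:
        if actual == 0:
            negatives[group] += 1
            if predicted == 1:
                false_pos[group] += 1
    return {group: false_pos[group] / negatives[group] for group in negatives}

if __name__ == "__main__":
    rates = false_positive_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    print(rates)
    if gap > 0.2:  # illustrative review threshold
        print("FPR gap exceeds threshold; flag for risk review.")
```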

If you recognize these steps, congratulations! You’ve been reading Red Clover’s blog for a while. We talk frequently about how these steps are a part of a strong privacy program—and because AI governance often overlaps with privacy teams, there is a lot that can be borrowed. 

Need more guidance on AI governance? We’ve got you covered here.

Stay up-to-date on the latest AI and consumer data protection regulations

Privacy regulations move quickly, but AI technology and regulations are developing even faster. Staying in step with them is becoming a business imperative—not just to optimize your workflows and increase profitability, but to avoid making the kind of missteps that can damage consumer trust and lead to costly compliance violations. 

Keep your finger on the pulse of consumer data privacy protections with Red Clover Advisors. Subscribe to our newsletter below, or schedule a call to see how you can build data privacy compliance for your business.