EU Artificial Intelligence Act
The European Union Artificial Intelligence Act (EU AI Act) is a comprehensive legal framework designed to regulate artificial intelligence systems within the European Union. The Act, a pivotal component of EU digital regulation, was adopted in May 2024 and entered into force on August 1, 2024; most of its provisions become applicable 24 months after entry into force, with rolling effective dates for different provisions. Its primary aim is to ensure that AI systems used in the EU are safe, ethical, and respectful of fundamental rights. Because the Act is long and detailed, this overview highlights some of its most notable and interesting elements rather than attempting to be exhaustive.
What you need to know about the EU AI Act:
The EU AI Act applies to entities that provide (providers), import (importers), distribute (distributors), or deploy (deployers) AI systems in the EU. In other words, every party in the chain that develops, places on the market, imports, distributes, or uses AI systems is covered in some way.
- Provider: An entity that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge (within or outside the EU).
- Deployer: An entity using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity (within the EU).
- Importer: A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.
- Distributor: A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market.
Key Components of the EU AI Act
The EU AI Act defines four levels of risk and assigns rules around each risk level:
- Unacceptable risk (banned practices)
- High-risk (limited by regulation)
- Limited risk (very little regulation, transparency required)
- Minimal risk (no regulation)
The EU AI Act is largely concerned with unacceptable-risk and high-risk AI systems: it bans systems posing unacceptable risk and imposes obligations on providers and deployers of high-risk AI systems, while limited- and minimal-risk AI systems are relatively unregulated.
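To make the tiering easier to see at a glance, here is a minimal Python sketch of the four tiers and the broad regulatory consequence attached to each. The tier names track the Act, but the summary strings are our own shorthand; this is illustrative only and not a compliance determination.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # allowed, subject to extensive obligations
    LIMITED = "limited"            # allowed, subject to transparency duties
    MINIMAL = "minimal"            # allowed, voluntary codes of conduct


# Illustrative summary of the regulatory consequence of each tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited from the EU market.",
    RiskTier.HIGH: "Conformity assessment, risk management, logging, human oversight.",
    RiskTier.LIMITED: "Transparency duties (e.g., disclose AI-generated content).",
    RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct.",
}

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {OBLIGATIONS[tier]}")
```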
Unacceptable risk:
Systems that represent an unacceptable risk, and are therefore banned in the EU, are those that infringe on the fundamental rights of individuals.
Examples include:
- Subliminal and Manipulative Techniques: AI systems using subliminal, manipulative, or deceptive techniques that distort behavior and impair decision-making, causing significant harm.
- Exploitation of Vulnerabilities: AI systems exploiting vulnerabilities due to age, disability, or socio-economic status, causing significant harm.
- Social Scoring: AI systems evaluating or classifying individuals based on social behavior or personality traits for social scoring.
- Predictive Criminal Risk Assessments: AI systems assessing or predicting the risk of criminal offenses based solely on profiling or personality traits, unless supporting human assessment with verifiable facts directly linked to a criminal activity.
- Facial Recognition Databases: AI systems creating or expanding facial recognition databases through untargeted scraping of internet or CCTV images.
- Emotion Inference in Sensitive Areas: AI systems inferring emotions in workplaces and educational institutions, except for medical or safety reasons.
- Biometric Categorization: AI systems categorizing individuals based on biometric data to infer sensitive attributes (e.g., race, political opinions, religious beliefs), except for lawful data handling in law enforcement.
- Real-Time Remote Biometric Identification for Law Enforcement: use in public spaces for law enforcement purposes, with limited exceptions.
High risk:
High-risk systems, which require companies to implement compliance mechanisms, are those that either:
- Are intended to be used as a safety component of a product (or are themselves a product) subject to established third-party conformity assessments; or
- Fall within other applications listed in Annex III that may implicate fundamental rights. These systems are subject to additional requirements under the Act. Use cases include AI applications in education, biometrics, employment, critical infrastructure, law enforcement, justice, migration, and essential services. However, there are exceptions, such as AI used for biometric identity verification, financial fraud detection, and the administrative organization of political campaigns.
Limited risk:
These are risks arising from a lack of transparency in the use of AI (risking manipulation or deception of end users). Organizations must use AI transparently and honestly, ensuring that humans are kept informed: AI-generated content must be identifiable, people must be made aware whenever they are interacting with a chatbot, and certain AI-generated text must be labeled as artificially generated before it can be published.
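As a purely illustrative example of the disclosure idea, the sketch below attaches a human- and machine-readable "AI-generated" label to a piece of content before publication. The record structure and field names are assumptions made for illustration; the Act does not prescribe any particular labeling format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContentItem:
    """Hypothetical content record carrying an AI-disclosure label."""
    body: str
    ai_generated: bool
    metadata: dict = field(default_factory=dict)


def label_ai_content(item: ContentItem) -> ContentItem:
    """Attach a disclosure to AI-generated content before it is published."""
    if item.ai_generated:
        item.metadata["disclosure"] = "This content was generated by an AI system."
        item.metadata["labelled_at"] = datetime.now(timezone.utc).isoformat()
    return item


article = label_ai_content(ContentItem(body="Market summary ...", ai_generated=True))
print(article.metadata["disclosure"])
```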
No/Minimal risk:
These are systems that do not fall into the above categories. Common examples include AI in video games and spam filters. The Act recommends voluntary codes of conduct to manage any residual risk.
Obligations of providers
Risk Management and Accountability:
Providers must institute a risk management system that enables them to identify and analyze the risks posed by their AI systems, particularly high-risk AI systems. This includes keeping logs on high-risk AI systems; ensuring data sets are accurate and representative; keeping technical documentation current; and providing for human oversight. Additionally, providers have a duty to respond to serious incidents involving their AI systems, including recalling, disabling, or withdrawing the system.
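To make the logging obligation concrete, here is a minimal sketch of the kind of structured event record a provider might keep for a high-risk system. The schema is an assumption chosen for illustration; the Act requires automatic logging but does not mandate these particular fields.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class HighRiskEventLog:
    """Illustrative log entry for one decision made by a high-risk AI system."""
    system_id: str
    timestamp: str
    input_reference: str        # pointer to the input data, not the data itself
    output_summary: str
    human_reviewer: str | None  # who exercised oversight, if anyone


def record_event(system_id: str, input_ref: str, output: str,
                 reviewer: str | None = None) -> str:
    entry = HighRiskEventLog(
        system_id=system_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_reference=input_ref,
        output_summary=output,
        human_reviewer=reviewer,
    )
    return json.dumps(asdict(entry))  # this line could then go to durable, tamper-evident storage


print(record_event("credit-scoring-v2", "application-4812", "score=0.37", "j.doe"))
```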
Providers based outside the EU must nominate an authorized representative to perform certain tasks.
Transparency:
Providers must design all AI systems so that deployers can interpret and use the output appropriately. High-risk AI systems must come with accurate instructions for use that enable deployers to understand, use, and maintain the system, properly interpret its outputs, and make decisions about and intervene in its operation. Among other things, these instructions must include (see the sketch after this list):
- The identity and contact details of the provider and, where applicable, its authorized representative;
- The characteristics, capabilities and limitations of the system’s performance;
- Any changes to the system and/or its performance which have been predetermined by the provider at the moment of the initial conformity assessment;
- Human oversight measures, including the technical measures to facilitate the interpretation of the outputs;
- Computational and hardware resources needed, the expected lifetime of the system and any necessary maintenance and care measures, including software updates;
- Where relevant, a description of the mechanisms included within the system that allows deployers to properly collect, store, and interpret technical logs built into the systems.
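One way a provider might keep these required contents organized internally is as a structured record. The sketch below mirrors the list above; the field names are our own shorthand, not language from the Act.

```python
from dataclasses import dataclass


@dataclass
class InstructionsForUse:
    """Illustrative record mirroring the required contents of instructions for use."""
    provider_identity: str            # provider and, where applicable, authorized representative
    capabilities_and_limitations: str
    predetermined_changes: str        # changes fixed at the initial conformity assessment
    human_oversight_measures: str     # including technical measures aiding interpretation
    resources_and_maintenance: str    # compute/hardware needs, expected lifetime, updates
    log_collection_mechanisms: str    # how deployers collect, store, and interpret logs
```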
Training:
Providers of AI systems must take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons operating or using AI systems on their behalf. These measures should take into account those persons' technical knowledge, experience, education, and training; the context in which the AI systems will be used; and the persons or groups on whom the AI systems will be used.
Assessments:
When working with certain types of high-risk AI systems, a fundamental rights impact assessment is required. Additionally, conformity assessments to verify and certify that an AI system complies with the Act may also be required.
Obligations of deployers
Risk Management:
Deployers must provide human oversight of the AI systems they deploy and review input data to ensure it is relevant and sufficiently representative in light of the system's intended purpose. They must also monitor the system after it has been placed on the market, among other obligations.
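As a loose illustration of the input-data review duty, the sketch below flags categories whose share of live input data diverges sharply from a reference profile documented by the provider. The frequency comparison and the tolerance threshold are assumptions for illustration; the Act does not prescribe any particular statistical check.

```python
from collections import Counter


def flag_representativeness_gaps(live_inputs: list[str],
                                 reference_shares: dict[str, float],
                                 tolerance: float = 0.10) -> list[str]:
    """Return categories whose observed share deviates from the documented
    reference share by more than `tolerance` (purely illustrative check)."""
    counts = Counter(live_inputs)
    total = len(live_inputs) or 1
    flagged = []
    for category, expected in reference_shares.items():
        observed = counts.get(category, 0) / total
        if abs(observed - expected) > tolerance:
            flagged.append(category)
    return flagged


# Example: the provider documented a 50/50 split between two applicant groups.
print(flag_representativeness_gaps(
    ["group_a"] * 80 + ["group_b"] * 20,
    {"group_a": 0.5, "group_b": 0.5},
))  # -> ['group_a', 'group_b']
```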
Transparency:
Entities deploying AI systems in employment settings must inform employees of the use of AI prior to implementation. Additionally, where high-risk AI systems are used for certain decision-making, deployers must make clear to affected individuals that they are subject to the use of AI.
Training:
Deployers of AI systems must take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons operating or using AI systems on their behalf. These measures should take into account those persons' technical knowledge, experience, education, and training; the context in which the AI systems will be used; and the persons or groups on whom the AI systems will be used.
Fundamental Rights Impact Assessments:
Certain activities by deployers require that a fundamental rights impact assessment be undertaken prior to deployment. The assessment must include:
- A description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
- A description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
- The categories of natural persons and groups likely to be affected by its use in the specific context;
- The specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to the point above in this paragraph, taking into account the information given by the provider as part of their transparency obligations;
- A description of the implementation of human oversight measures, according to the instructions for use; and
- The measures to be taken if those risks materialize, including the arrangements for internal governance and complaint mechanisms.
The Act directs the AI Office to create a template assessment to facilitate this obligation. Upon completion, deployers must notify the market surveillance authority of the results by submitting the completed template.
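Pending that official template, a deployer could track the required elements in a simple structured record like the sketch below. The field names are our own shorthand for the list above and are purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative record of the elements a FRIA must cover (not the AI Office template)."""
    deployer_processes: str              # how and where the high-risk system will be used
    period_and_frequency_of_use: str
    affected_persons_and_groups: list[str]
    specific_risks_of_harm: list[str]
    human_oversight_measures: str        # per the provider's instructions for use
    mitigation_and_governance: str       # measures if risks materialize, complaint mechanisms
    notified_to_authority: bool = False  # results go to the market surveillance authority
```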
The EU AI Act exempts:
- Member states’ national security and military
- Public entities of third countries for international cooperation with appropriate safeguards
- Scientific research
- Systems still in research, development, or testing before being placed on the market, unless tested in real-world conditions
- Purely personal uses
- Free, open-source systems that are not high-risk
The Act has a rolling enforcement schedule, as follows:
- The ban on AI systems posing unacceptable risks applies from February 2025
- Codes of practice are due by May 2025
- Rules on general-purpose AI models, including their transparency requirements, apply from August 2025
- Most of the remaining provisions of the Act become applicable in August 2026
- Obligations for high-risk systems embedded in products covered by existing EU product legislation apply from August 2027