The Oregon Attorney General recently published guidance on the applicability of Oregon’s existing laws to the use of AI technologies, particularly the state’s Unlawful Trade Practices Act (UTPA) and Consumer Privacy Act (OCPA). The guidance underscores that businesses must comply with the transparency, accountability, and consumer protection obligations in these laws when deploying AI in consumer-facing or personal information-processing contexts.
In the guidance, the AG recognizes the enormous opportunity and benefits AI can bring to businesses but emphasizes that these must be balanced with the potential risks.
Unlawful Trade Practices Act
The UTPA was enacted to prevent misrepresentations in consumer transactions — including misrepresentation by omission. The AG calls attention to the following UTPA considerations when developing or deploying AI tools.
Affirmative Duty to Disclose: Businesses must disclose when their AI systems are known to be frequently inaccurate or misleading.
- Real World Example: A common use for consumer-facing AI is automated chatbots for customer support. Businesses should present disclosures to consumers letting them know they are engaging with AI chatbots.
- Operational Impact: Companies using AI systems should regularly audit them for accuracy and bias. Additionally, disclosing both the use of AI systems and the possibility that they may be inaccurate is a recommended, low-cost way to reduce risk.
AI Transparency: AI systems must not falsely claim to be human.
- Real World Example: If a chatbot uses a human-sounding name (e.g., “Eric”), it must either be operated by a human or disclose that it is nonhuman.
- Operational Impact: Ensure you have proper notifications when consumers are interacting with AI systems or tools.
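As a rough sketch of what "proper notification" could look like in a chatbot front end, the snippet below prepends a nonhuman disclosure to the first message a consumer sees. The function name and disclosure wording are illustrative assumptions, not language prescribed by the UTPA or the AG's guidance:

```python
# Illustrative sketch: surface an AI disclosure at the start of a chat session.
# The wording and function name are hypothetical, not legally prescribed text.

def open_chat_session(bot_name: str) -> str:
    """Return the greeting a consumer sees when a chat session starts."""
    return (
        f"You are chatting with {bot_name}, an automated AI assistant, "
        "not a human agent. Responses may contain errors."
    )

greeting = open_chat_session("Eric")
```

Pairing the human-sounding name with an explicit "automated AI assistant" label addresses both the naming and the disclosure concerns in one place.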
Deceptive Advertising:
- AI cannot be used to:
- Generate fake reviews or celebrity endorsements.
- Create misleading advertising, such as “limited-time” or “flash sale” offers that are not genuinely time-sensitive.
- Produce misleading AI-generated voices in robocalls, including false claims about the caller’s identity or purpose.
- Real World Example: An online retailer uses AI to generate product reviews and other marketing content.
- Operational Impact: Conduct reasonableness audits of AI systems to ensure their uses align with consumer expectations, and monitor AI systems involved in advertising so they do not engage in the deceptive practices listed above.
Dynamic Pricing:
- While AI can be used for dynamic pricing, it cannot result in unconscionably high prices during a declared emergency.
- Real World Example: A grocery delivery platform uses AI for dynamic pricing but must cap dynamic delivery fees at reasonable levels during a hurricane emergency to avoid unconscionably high prices.
- Operational Impact: The use of dynamic pricing carries risks of deception beyond emergencies and should be carefully assessed whenever implemented. Put in place policies and technological protections to prevent large pricing jumps.
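One of the "technological protections" mentioned above could be a simple guard that caps algorithmic prices relative to a pre-emergency baseline. The 50% cap, field names, and function below are assumptions for the sketch, not figures taken from Oregon law or the guidance:

```python
# Illustrative guard against unconscionable price spikes during a declared
# emergency. The cap percentage and names are assumptions for this sketch.

def guarded_price(model_price: float, baseline_price: float,
                  emergency_declared: bool, max_increase: float = 0.5) -> float:
    """Return the dynamic model's price, capped relative to a pre-emergency
    baseline whenever an emergency has been declared."""
    if emergency_declared:
        cap = baseline_price * (1 + max_increase)
        return min(model_price, cap)
    return model_price
```

Outside an emergency the model's output passes through unchanged; during one, the guard enforces the ceiling regardless of what the pricing model suggests.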
Other Forms of Deception:
- AI’s ability to convincingly mimic human communication makes it a heightened risk for deceptive practices. Businesses must avoid using AI to mislead consumers.
- Real World Example: Consider how AI can mislead consumers, such as through inaccurate answers in chat-based interactions or deceptive language in AI-generated advertising.
- Operational Impact: Whenever developing or deploying AI, consider the risks of harm to end users, including errors, mistakes, deception and profiling. Businesses should put in place an AI governance program to ensure consistent application of policies.
AI Governance Roadmap: Business Guide
Our AI Governance Roadmap guides you through building a successful AI governance program.
Oregon Consumer Privacy Act
The OCPA applies to all uses of personal information (PI) by covered entities, including its use in AI systems. This impacts businesses in a variety of ways, particularly when training and deploying AI models. The AG underscores the importance of transparency, consent, and accountability when using PI in AI systems.
DPAs are Required When Processing PI for AI:
- The AG considers the use of consumer PI in AI systems to carry a heightened risk of harm. Therefore, businesses that feed consumer PI into AI models must conduct comprehensive Data Protection Assessments (DPAs) to evaluate and mitigate potential risks.
- Real World Example: An online retailer that uses consumer PI to train an AI model for targeted advertising is, in the AG's view, engaged in processing that presents a heightened risk.
- Operational Impact: Conduct a DPA whenever consumer PI is fed into and processed as part of an AI system. Explore the possibilities of using de-identified information in AI systems to achieve your desired results.
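As a starting point for the de-identification idea above, the sketch below strips direct identifier fields before records reach a training pipeline. The field names are assumptions, and real de-identification under the OCPA involves more than dropping columns (for example, re-identification risk controls and contractual commitments):

```python
# Minimal sketch: remove direct identifiers before records enter an AI
# training pipeline. Field names are hypothetical; dropping columns alone
# does not make data "de-identified" in the legal sense.

DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

row = {"name": "Ada", "email": "ada@example.com", "zip3": "972", "purchases": 7}
clean = deidentify(row)
```

A step like this can also feed the DPA itself, by documenting exactly which fields an AI system can and cannot see.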
AI Developers as Controllers:
- Developers of AI systems who use or purchase PI from other companies for AI training may be classified as Controllers, which subjects them to strict legal obligations.
- Real World Example: A marketing analytics company that purchases customer PI from an e-commerce platform to train an AI system for personalized ad targeting might be considered a Controller.
- Operational Impact: Review sources of personal information to determine whether you act as a controller or processor when using PI to train AI systems and models. Ensure contracts are in place for accountability and compliance.
Consent and Transparency Requirements:
- Businesses are required to accurately communicate their data practices, usually via privacy notices. Deployers and developers of AI systems must clearly and accurately disclose the use of PI in AI systems. Misleading consumers about how their PI is used or shared, even indirectly through third parties, could constitute a violation.
- Additionally, where businesses intend to use sensitive PI or previously collected PI in AI systems, they must obtain consent from consumers before doing so.
- Consumers must have the ability to withdraw previously given consent. Once consent is revoked, the company must stop processing the relevant data within 15 days of receiving the withdrawal request.
- Real World Example: A social media platform collects users’ religious preferences as part of their profiles and now intends to use this data to train an AI model for creating tailored community recommendations. Because religious affiliation is sensitive PI, the platform must obtain consent before this new use.
- Operational Impact: Ensure your notice is accurate as to your uses of PI in AI, including any notices provided by third parties. Review and maintain records of privacy notices and consent language to ensure you have appropriate consent to use PI in AI systems. Establish and maintain consent withdrawal mechanisms that are as easy to use as providing consent in the first place.
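The 15-day window for halting processing after a consent withdrawal lends itself to simple deadline tracking. A minimal sketch, with hypothetical function and field names (only the 15-day figure comes from the guidance):

```python
# Sketch of tracking the 15-day window for halting processing of PI after a
# consumer withdraws consent. Names are illustrative assumptions.

from datetime import date, timedelta

PROCESSING_HALT_WINDOW_DAYS = 15  # deadline stated in the guidance

def processing_deadline(withdrawal_received: date) -> date:
    """Latest date by which processing of the relevant PI must stop."""
    return withdrawal_received + timedelta(days=PROCESSING_HALT_WINDOW_DAYS)

def is_overdue(withdrawal_received: date, today: date) -> bool:
    """True when the halt deadline for a withdrawal request has passed."""
    return today > processing_deadline(withdrawal_received)
```

Wiring a check like this into a compliance dashboard makes missed withdrawal deadlines visible before they become violations.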
Profiling and Opt-Out Rights:
- Consumers must be given the option to opt out of profiling that involves AI models used for decisions with legal or otherwise significant impacts. This includes decisions related to housing, education, lending, or other similarly important matters.
- Real World Example: An insurance company uses an AI model to assess policy premiums based on a customer’s driving habits collected through a telematics device. To comply with legal requirements, the company must provide customers with the option to opt out of AI-based profiling, ensuring those who opt out are evaluated using traditional, non-AI underwriting methods.
- Operational Impact: Establish a test to determine whether profiling will result in significant effects on an individual. Ensure proper notification and opt-out options are provided when using AI to make decisions in this category. For instance, if you use AI to profile and target consumers with financial offers, consider whether this is an area of significant impact on the consumer.
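The "test" and routing logic described above can be sketched in a few lines. The domain list is an assumption drawn from the examples in the guidance, not a statutory definition, and all names are hypothetical:

```python
# Illustrative check for whether an AI-driven decision triggers opt-out
# rights, and routing of opted-out consumers to a non-AI path. The domain
# set and names are assumptions, not a statutory definition.

SIGNIFICANT_EFFECT_DOMAINS = {"housing", "education", "lending", "insurance"}

def requires_opt_out(decision_domain: str, uses_ai_profiling: bool) -> bool:
    """True when a consumer opt-out from AI profiling must be offered."""
    return uses_ai_profiling and decision_domain in SIGNIFICANT_EFFECT_DOMAINS

def route_decision(decision_domain: str, consumer_opted_out: bool) -> str:
    """Pick the evaluation path for one consumer decision."""
    if requires_opt_out(decision_domain, uses_ai_profiling=True) and consumer_opted_out:
        return "manual_review"  # traditional, non-AI evaluation
    return "ai_model"
```

In the telematics example, an opted-out customer would be routed to `manual_review` (traditional underwriting) while others continue through the AI model.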
Data Deletion Rights:
- Consumers’ right to request the deletion of their PI must be honored in the context of AI models. Specifically, the guidance notes that PI used to train AI models must also be deleted upon request.
- Real World Example: A travel booking app uses customer PI to train an AI model to provide personalized destination recommendations and receives a request to delete personal information from a consumer.
- Operational Impact: When a customer exercises their right to request deletion of their PI, the business must ensure it removes the data from its databases and that the AI model is retrained or adjusted to exclude the customer’s PI from its training set.
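Honoring deletion "end to end" means purging the consumer's records from stored data and rebuilding the training set before the next retraining run. A minimal sketch, using a hypothetical list-of-dicts dataset and illustrative field names:

```python
# Sketch of an end-to-end deletion step: purge one consumer's rows so a
# retrained model no longer includes them. Dataset shape and field names
# are illustrative assumptions.

def delete_consumer(records: list[dict], consumer_id: str) -> list[dict]:
    """Return the dataset with every record for consumer_id removed."""
    return [r for r in records if r["consumer_id"] != consumer_id]

db = [
    {"consumer_id": "c1", "destination": "Lisbon"},
    {"consumer_id": "c2", "destination": "Osaka"},
    {"consumer_id": "c1", "destination": "Quito"},
]
training_set = delete_consumer(db, "c1")
# The recommendation model would then be retrained on the reduced set.
```

The key operational point is that deletion must reach the training pipeline, not just the production database.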
Translating privacy laws, one dialect at a time
Need a translator to navigate data privacy laws? Contact Red Clover Advisors to discuss your data privacy needs.