Texas Responsible Artificial Intelligence Governance Act

What you need to know about the Texas Responsible Artificial Intelligence Governance Act

To What Entities Does TRAIGA Apply?
  • Deployers: A person who deploys an AI system in the state.
  • Developers: A person who develops an AI system that is provided in the state.

TRAIGA also includes specific rules for the use of AI by government agencies and healthcare entities.

To What Technologies Does TRAIGA Apply?

TRAIGA does not target specific AI technologies. See the “What Is Prohibited?” section for information on prohibited uses of AI.

What Constitutes AI under TRAIGA?

Any machine-based system that “infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

When Does TRAIGA NOT Apply?

TRAIGA does not apply to:

  • Voiceprint data retained by a financial institution or its affiliates;
  • Biometric data used for training, developing, or offering AI systems, unless the system is used to identify a specific individual;
  • The development or deployment of an AI model or system for the purpose of preventing, detecting, or protecting against security threats, fraud, or other illegal activity;
  • Preserving the integrity or security of a system;
  • Investigating, reporting, or prosecuting individuals believed to be responsible for illegal activities.

Key Components of the Texas Responsible Artificial Intelligence Governance Act

What Is Prohibited?

The law prohibits the development or deployment of AI systems that are designed to:

  • Encourage self-harm, harm to others, or criminal activity;
  • Violate or infringe on an individual’s constitutional rights;
  • Discriminate based on protected classes under state and federal law;
  • Produce or distribute non-consensual pornography, sexually explicit deepfakes, or child pornography.

Government agencies may not deploy AI systems that:

  • Evaluate individuals or groups of individuals for social scoring where the scoring may be unjustified or infringe on their rights;
  • Identify individuals using biometrics or online images captured without consent where the collection would infringe on their rights.

No entities may develop or deploy AI systems with the purpose of:

  • Infringing on or restricting constitutional rights;
  • Discriminating against a protected class;
  • Creating or distributing illegal sexually explicit content, including child pornography and deepfakes;
  • Engaging in text conversations of a sexual nature while impersonating a minor.

Transparency Obligations
  • Government entities using AI to interact with consumers must disclose to consumers before or at the time of the interaction that the consumer is interacting with an AI system.
  • Healthcare entities must provide notice to the individual or their representative no later than the date the service or treatment is provided, except in an emergency.

Disclosures must be clear and conspicuous, in plain language, and not use dark patterns.

Minimization Obligations

TRAIGA does not have data minimization requirements; however, the Texas Data Privacy and Security Act requires covered entities to minimize the collection of personal information and to use it only for the purposes for which it was collected, as disclosed to the consumer.

Accountability Obligations

Covered entities must follow the Texas Data Privacy and Security Act when using personal information in AI systems. TRAIGA itself does not require additional accountability mechanisms.

TRAIGA also establishes a regulatory sandbox program in which developers can test innovative AI systems in a limited market while receiving certain legal protections. Developers must obtain approval from the Texas Department of Information Resources and other applicable agencies to enter the sandbox program.

AI Impact Assessments

There is no specific obligation to conduct AI impact assessments; however, if the Attorney General receives a complaint, they may request documentation that would generally be found in such an assessment.

Additionally, the Texas Data Privacy and Security Act requires data protection assessments for processing activities that present a heightened risk of harm to consumers.

Reporting Obligations

The Attorney General may request the production of a privacy assessment that includes the purpose of the system, the data used to train the model, the categories of data processed as inputs, a description of the outputs, and more.

How Will TRAIGA Be Enforced?

The Texas Attorney General enforces TRAIGA. Before bringing an enforcement action, the Attorney General must provide written notice of a violation and give companies 60 days to cure it.

Failure to cure a violation or qualify for a safe harbor may result in fines of $10,000 to $12,000 for curable violations and $80,000 to $200,000 for incurable violations, plus a daily penalty of $2,000 to $40,000 for continuing violations.

Additionally, once the Attorney General has enforced the law, other state agencies may impose further sanctions on licensed companies at the Attorney General’s recommendation, including suspending or revoking licenses and fines of up to $100,000.

TRAIGA creates the Texas Artificial Intelligence Council, which will work to ensure AI systems are developed and deployed ethically and in the public’s best interest, as well as analyze the AI landscape, its efficiencies, and barriers to innovation.

It also creates an Artificial Intelligence Regulatory Sandbox Program designed to encourage and promote responsible use and innovation in AI systems.
