Texas Responsible Artificial Intelligence Governance Act
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) was signed into law on June 22, 2025, and goes into effect Jan. 1, 2026. It regulates the deployment and development of AI systems provided in the state and creates a state AI Council and regulatory sandbox to support the ethical creation and use of AI as well as encourage innovation. TRAIGA prohibits certain uses of AI systems, requires transparency when consumers are interacting with AI, and provides the Attorney General with fining powers for violations after a 60-day cure period.
What You Need to Know About the Texas Responsible Artificial Intelligence Governance Act
TRAIGA applies to two groups:
- Deployers: A person who deploys an AI system in the state.
- Developers: A person who develops an AI system that is provided in the state.
TRAIGA also includes specific rules for the use of AI by government agencies and healthcare entities.
There are no specific AI technologies targeted by TRAIGA. See the “What Is Prohibited?” section for information on prohibited uses of AI technologies.
The law defines an AI system as any machine-based system that “infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
TRAIGA does not apply to:
- Voiceprint data retained by a financial institution or its affiliates;
- Biometric information used for training, developing, or offering AI systems, unless the system is used to identify a specific individual;
- The development or deployment of an AI model or system for the purpose of preventing, detecting, or protecting against security threats, fraud, or other illegal activity;
- Preserving the integrity or security of a system;
- Investigating, reporting, or prosecuting individuals believed to be responsible for illegal activities.
Key Components of the Texas Responsible Artificial Intelligence Governance Act
The law prohibits the development or deployment of AI systems that are designed to:
- Encourage self-harm, harm to others, or criminal activity;
- Infringe on an individual’s constitutional rights;
- Discriminate against a protected class under state or federal law;
- Produce or distribute non-consensual sexually explicit content, including deepfakes and child sexual abuse material.
Government agencies may not deploy AI systems that:
- Evaluate individuals or groups of individuals through social scoring that may result in unjustified treatment or infringe on their rights;
- Identify individuals using biometric data or images captured online without consent, where the collection would infringe on their rights.
No entities may develop or deploy AI systems with the purpose of:
- Infringing on or restricting constitutional rights;
- Discriminating against a protected class;
- Creating or distributing illegal sexually explicit content, including child sexual abuse material and deepfakes;
- Engaging in text conversations of a sexual nature while impersonating a minor.
- Government entities using AI to interact with consumers must disclose to consumers before or at the time of the interaction that the consumer is interacting with an AI system.
- Healthcare entities must provide notice to the individual or their representative no later than the date of the service or treatment, except in an emergency.
Disclosures must be clear and conspicuous, in plain language, and not use dark patterns.
TRAIGA itself does not impose data minimization requirements; however, the Texas Data Privacy and Security Act obligates covered entities to minimize the collection of personal data and to use it only for the purposes for which it was collected and disclosed to the consumer.
Covered entities must follow the Texas Data Privacy and Security Act when using personal data in AI systems. TRAIGA does not require additional accountability mechanisms.
TRAIGA establishes a regulatory sandbox program in which developers can test innovative AI systems in a limited market with legal protections. Developers must obtain approval from the Texas Department of Information Resources and other applicable agencies to enter the sandbox program.
There is no specific obligation to conduct AI impact assessments; however, if the Attorney General receives a complaint, they may request documentation that would typically appear in such an assessment.
Additionally, the Texas Data Privacy and Security Act requires data protection assessments for processing activities that present a heightened risk of harm to consumers.
The Attorney General may request the production of a privacy assessment that includes the purpose of the system, data used to train the model, categories of data processed as inputs, description of the outputs, and more.
The Texas Attorney General enforces TRAIGA and must provide written notification of a violation and give companies 60 days to cure the violation.
Failure to cure, or to qualify for a safe harbor, may result in fines of $10,000 to $12,000 per curable violation and $80,000 to $200,000 per incurable violation, plus $2,000 to $40,000 per day for continued violations.
Additionally, once the AG has enforced the law, other state agencies may impose further sanctions on licensed companies at the AG’s recommendation. These may include suspending or revoking licenses and fines of up to $100,000.
TRAIGA creates the Texas Artificial Intelligence Council, which will work to ensure AI systems are developed and deployed ethically and in the public’s best interests, as well as analyze the AI landscape, its efficiencies, and barriers to innovation.
It also creates an Artificial Intelligence Regulatory Sandbox Program designed to encourage and promote responsible use and innovation in AI systems.
Data Privacy is Just Good Business
Managing privacy compliance with all the new state privacy and AI laws popping up in the U.S. might seem like a daunting task. But daunting doesn’t mean impossible.
You don’t have to go it alone! With the right support, you can make data privacy measures a sustainable part of your daily operations. That’s where Red Clover Advisors comes in: we deliver practical, actionable, business-friendly privacy strategies to help you achieve data privacy compliance and establish yourself as a consumer-friendly privacy champion that customers will appreciate.