Fear of flying is pretty common, but flying today is remarkably safe, even with anxiety-inducing stories sometimes cropping up in the news.
However, that wasn’t always the case. In 1929, there were 51 fatal commercial airline accidents, about 1 for every million miles flown. At today’s flight volumes, that accident rate would produce roughly 7,000 fatal accidents each year.
In 1926, President Calvin Coolidge signed the Air Commerce Act to regulate pilots, certify aircraft, and investigate accidents. Yet accidents kept occurring through the early 1930s.
By the mid-1930s, though, the accident rate had dropped to about a tenth of its 1928-29 level. What led to the improved safety?
As investigators examined the causes of failures, regulators and airlines began implementing processes that could catch problems before they turned into accidents. Airworthiness inspections grounded unsafe aircraft before they flew. Pilot licensing required demonstrated competency, not just paperwork. Airways with navigation aids—radio beacons and lighted routes—helped pilots avoid getting lost in bad weather.
These systems—not just the regulations—made flying safer.
AI governance is facing a similar moment. Some regulations are taking effect, and organizations are writing policies, but neither is the same as having systems in place that prevent harm.
The AI regulatory landscape
The saying goes: we’re building the plane while flying it. It’s an uncomfortably on-point description of AI regulation so far. While AI, machine learning, and large language models have been around for a long time, ChatGPT’s launch in late 2022 put them in everyone’s hands almost overnight, and regulatory bodies have been scrambling to put guardrails in place ever since.
The EU AI Act was passed in 2024. Enforcement of provisions applicable to prohibited AI practices began in February 2025, and enforcement of provisions related to high-risk systems will continue to roll out through August 2027. EU member states designated enforcement authorities on August 2, 2025, and the Act’s penalties took effect:
- Up to €35 million or 7% of global annual turnover for prohibited AI practices, whichever is higher
- Up to €15 million or 3% for violating the Act’s other obligations, including those that apply to providers and deployers of high-risk AI systems
- Up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to public authorities.
In the absence of federal action in the U.S., lawmakers introduced 260 AI measures across 47 states in 2025, with 22 becoming law:
- Comprehensive AI regulation
- Colorado’s AI Act (delayed effective date to June 30, 2026): Requires impact assessments for high-risk AI systems in employment, housing, credit, education, and healthcare
- State-specific laws
- Texas Responsible AI Governance Act (effective January 1, 2026): Focuses on discrimination and harm in AI systems
- New York RAISE Act (pending the governor’s signature): Targets frontier AI models with transparency and risk safeguards
- Connecticut SB 1295 (effective July 1, 2025): Amends existing data privacy law provisions on automated decision-making
- California sector-specific laws
- Employment discrimination regulations (effective October 1, 2025): Makes it unlawful to use automated decision systems for discriminatory purposes, requires bias testing, and mandates four-year record retention
- SB 243 (companion chatbot law) (effective January 1, 2026): Creates a private right of action up to $1,000 per violation for chatbot harm and requires disclosure that chatbots aren’t human
- SB 53 (Transparency in Frontier AI Act) (effective January 1, 2026): Requires developers to post information about data used to train generative AI systems
But regulations don’t equal governance
Despite the growth in AI regulations, AI governance—the policies, processes, and oversight that structure how organizations use artificial intelligence—isn’t something you implement only when laws require it.
When:
- AI systems get deployed without bias testing, or
- Employees use unauthorized tools that leak proprietary data, or
- Vendors add AI features that change how your data gets processed…
…businesses face operational risks that exist regardless of legal requirements. A biased hiring algorithm can damage your brand before you even know bias exists in the model. Unauthorized AI tools can expose customer data to third-party training datasets, creating privacy violations that go virtually unseen until a breach surfaces.
Put another way: organizations need governance frameworks to manage these risks whether regulations exist or not.
That being said, AI governance activity has grown. The IAPP and Credo AI surveyed 670 organizations across 45 countries and found 77% were actively working on AI governance (almost 90% for companies already using AI), with 47% calling it a top-five strategic priority.
But activity doesn’t equal readiness. As organizations built AI governance programs through 2025, three lessons emerged: policies alone don’t prevent incidents without operational systems to enforce them; unclear lines of responsibility across teams mean no one owns decisions when problems occur; and traditional vendor oversight can’t track ever-shifting third-party AI capabilities.
Lesson #1: Policies don’t stop incidents, systems do
Having an AI governance policy doesn’t mean you have the tools or processes in place to prevent problems.
IBM’s 2025 Data Breach Report underscored this reality. Among organizations that reported AI model or application breaches (13% of those surveyed), 97% lacked AI access controls. Nearly two-thirds had no governance policies at all. Even among organizations with policies, only 34% of them performed regular audits to catch unsanctioned AI use.
Building those systems requires a structured approach. One widely adopted framework—NIST’s AI Risk Management Framework—breaks implementation into four functions that translate policy into operational practice:
- Govern: Assign clear ownership of AI risk decisions. Who has the authority to stop deployments when risk reviews flag problems? Document what level of risk requires executive sign-off versus team-level decisions.
- Map: Catalog which AI systems are running and what data they access, from hiring tools that process applications, to chatbots handling customer queries. You can’t control risks you haven’t identified.
- Measure: Assess specific risks each system creates. Does your hiring tool have bias testing? Can your chatbot hallucinate responses that customers will rely on? Does your facial recognition system have accuracy thresholds before triggering human review?
- Manage: Implement technical controls that enforce policies. Network monitoring that blocks unauthorized use of AI tools. Access logging that flags unusual data queries. Output validation that catches hallucinated responses. Automated gates requiring human approval before high-risk decisions execute (a simple sketch of one such gate follows this list).
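To make the Manage function concrete, here is a minimal sketch in Python of what an automated approval gate might look like. The risk tiers, names, and `execute_decision` function are illustrative assumptions, not a reference implementation of the NIST framework.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"                  # team-level decision, no extra review
    HIGH = "high"                # requires documented human approval
    PROHIBITED = "prohibited"    # blocked outright

@dataclass
class AIDecision:
    system_name: str             # e.g., "resume-screener" (hypothetical system)
    action: str                  # e.g., "reject_candidate"
    risk_tier: RiskTier

def execute_decision(decision: AIDecision, human_approved: bool = False) -> str:
    """Gate AI-driven actions behind the risk thresholds defined in policy."""
    if decision.risk_tier is RiskTier.PROHIBITED:
        return f"BLOCKED: {decision.system_name} attempted a prohibited action"
    if decision.risk_tier is RiskTier.HIGH and not human_approved:
        # Hold execution and route to the designated owner for sign-off.
        return f"PENDING REVIEW: '{decision.action}' from {decision.system_name} awaits human approval"
    return f"EXECUTED: '{decision.action}' from {decision.system_name}"

# A high-risk hiring decision is held until a reviewer signs off.
print(execute_decision(AIDecision("resume-screener", "reject_candidate", RiskTier.HIGH)))
```

The point is structural: high-risk actions pause by default, and the policy threshold lives in the system itself rather than only in a document.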
Without technical controls actively in use, AI governance policies are just documentation of intentions. Organizations heading into 2026 need systems that prevent violations before they cause damage, not policies that describe what should happen after the fact.
Lesson #2: Scattered responsibility means no responsibility
When AI governance is everyone’s job, it becomes no one’s job.
While the IAPP and Credo AI survey found widespread activity on AI governance, it also found that half of AI governance professionals were scattered across ethics, compliance, privacy, and legal teams, and that reporting practices were fragmented:
- 23% reported to general counsel
- 17% to the CEO
- 14% to the CIO
- Only 39% had established AI governance committees
- 98.5% said they needed more AI governance staff
On the other hand, when privacy teams led AI governance, 67% felt confident about EU AI Act compliance. (This pairing, as we’ve discussed in other blogs, makes sense, as many AI governance processes can borrow from privacy ones; plus, AI activities frequently depend on personal data.)
But whether it’s privacy, IT, or the C-suite, who leads AI governance matters less than how that responsibility takes shape. To build real accountability, organizations should:
- Decide who has authority to oversee AI usage. Whether it’s a Chief AI Officer, CPO, or CTO, one person must own final decisions. IBM’s 2025 study found organizations with dedicated AI leadership report approximately 10% higher return on AI spend.
- Define decision thresholds clearly. Document which risks require executive approval versus team-level decisions, so product teams don’t face uncertainty when problems surface.
- Establish a cross-functional AI governance committee that meets regularly. Privacy, legal, security, and IT all contribute expertise, but the committee needs a clear chair with decision-making authority.
- Document escalation paths. Product teams need to know exactly who to contact when risk reviews flag problems, how fast decisions get made, and who can stop deployment if needed.
Who Owns AI Risk in Your Organization?
When AI governance responsibility is fragmented across teams, risk decisions stall or never get made. We help organizations establish clear ownership, escalation paths, and operational oversight for AI systems.
Lesson #3: Vendor oversight built for yesterday’s risks misses today’s problems
Traditional due diligence assumes relatively stable vendor capabilities, but AI is throwing a wrench in that assumption. A vendor passing an audit in January might have deployed new AI features by March that fundamentally changed data access patterns and risk exposure.
Organizations heading into 2026 need three vendor risk capabilities that their current processes might not provide:
Inventory third-party AI use across your vendor ecosystem
Identify which vendors embed AI in their products, which use AI to deliver services, and what data each system accesses. Annual questionnaires don’t work when vendors add AI capabilities between reviews. Require vendors to disclose AI deployments before implementation and document what data their AI systems use.
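As a rough illustration, a third-party AI inventory can be as simple as a structured record per vendor feature. The Python sketch below uses hypothetical vendor and field names; the useful part is that disclosure and data access become fields you can query rather than answers buried in a questionnaire.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorAIRecord:
    """One entry in a third-party AI inventory (field names are illustrative)."""
    vendor: str
    ai_feature: str                    # what the vendor's AI actually does
    data_accessed: list[str]           # categories of data the feature touches
    disclosed_before_deployment: bool
    last_reviewed: date

inventory = [
    VendorAIRecord(
        vendor="ExampleHRPlatform",    # hypothetical vendor
        ai_feature="resume ranking",
        data_accessed=["applicant resumes", "hiring outcomes"],
        disclosed_before_deployment=True,
        last_reviewed=date(2025, 3, 15),
    ),
]

# Flag entries that were never disclosed or haven't been reviewed recently.
for record in inventory:
    if not record.disclosed_before_deployment or record.last_reviewed < date(2025, 6, 1):
        print(f"Review needed: {record.vendor} / {record.ai_feature}")
```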
Assess AI-specific risks that standard audits miss
AI risk assessments need to identify risks that traditional audits weren’t built to catch. Whether evaluating internal systems or third-party tools, assess: What data trains the models? Is bias testing conducted? How is authentication for AI integrations secured? What controls prevent unauthorized AI deployments?
Organizations need documentation to verify that these controls actually exist, including through bias testing reports, security audits for AI components, and incident response procedures for AI failures. For third-party vendors, require this evidence during contract negotiations.
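One way to keep those questions from evaporating after the audit is to record each one alongside the evidence on file, so a missing answer shows up as a gap. The Python snippet below is a hypothetical sketch of that idea, not a formal assessment standard; the keys and values are illustrative.

```python
# Questions a standard vendor audit tends to skip, paired with the evidence on file.
assessment = {
    "system": "vendor-chatbot",                       # hypothetical third-party tool
    "training_data_sources": "vendor-proprietary corpus + customer support tickets",
    "bias_testing_report": None,                      # None = no evidence provided
    "integration_auth_method": "scoped API keys",
    "unauthorized_deployment_controls": None,         # None = no evidence provided
}

# Missing evidence is the signal: chase it during contract negotiation, not after an incident.
gaps = [question for question, evidence in assessment.items() if evidence is None]
print(f"Evidence gaps for {assessment['system']}: {gaps}")
```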
Monitor vendor behavior continuously, not annually
When a vendor’s AI integration starts behaving differently mid-contract, organizations relying on annual reviews won’t catch the change until the next assessment cycle. Implement continuous monitoring that flags unusual data access patterns, excessive API queries, or integration behaviors that deviate from baseline. Verizon found that only one-third of organizations continuously monitor vendor relationships, despite 57% citing operational disruption as their primary third-party risk.
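As a rough sketch of what “deviation from baseline” can mean in practice, the Python snippet below flags days where a vendor integration’s API call volume strays more than a few standard deviations from its historical average. The numbers and the three-sigma threshold are illustrative assumptions; real monitoring would draw on your own access logs and alerting stack.

```python
from statistics import mean, stdev

def flag_unusual_activity(daily_api_calls: list[int], threshold_sigmas: float = 3.0) -> list[int]:
    """Flag recent days whose API volume deviates sharply from the historical baseline."""
    baseline = daily_api_calls[:-7]              # everything except the most recent week
    mu, sigma = mean(baseline), stdev(baseline)
    recent = daily_api_calls[-7:]
    return [calls for calls in recent if abs(calls - mu) > threshold_sigmas * sigma]

# Example: a vendor integration that suddenly triples its query volume.
history = [1_000, 1_050, 980, 1_020, 990, 1_010, 1_000, 970, 1_030, 995,
           1_005, 1_015, 985, 1_000, 3_200]      # last value is the anomaly
print(flag_unusual_activity(history))            # -> [3200]
```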
Organizations that treat AI vendor risk as a procurement checkbox rather than an ongoing operational requirement discovered in 2025 what happens when vendors change faster than oversight can track.
Flying blind? Or flying safe?
AI governance in 2026 faces the same choices that the airline industry faced in the 1930s. Organizations can document policies and hope for compliance, or they can build operational systems that prevent violations, assign clear ownership when decisions need to be made, and track third-party risks that change faster than annual reviews can catch.
Red Clover Advisors helps organizations translate AI governance policies into operational practice. Learn more about AI governance with these helpful resources:
- AI Governance Roadmap: Business Guide
- The Ultimate Privacy & AI Sketchbook: Everything You Need to Know
- The Complete Privacy Compliance Checklist for 2026
Schedule a consultation to assess where your governance gaps are and what systems you need in place before 2026’s regulatory environment arrives.
