When Governance Becomes Pattern-Matching

The EU AI Act is the most ambitious AI regulation in the world. Most organisations will implement it like ISO 9001.

That's not a criticism of the Act. It's an observation about how organisations absorb regulation.

The pattern-matching problem

Siqi Chen, CEO of Runway, published a piece in VentureBeat last week arguing that AI breaks every historical analogy. His core observation is simple: when something new appears, humans reach for the nearest comparison, because we are wired for pattern-matching. Dot-com. Electricity. Mobile. Even when the pattern doesn't fit. Even when the thing we're looking at has no precedent in our reference set.

Chen is talking about capital markets and valuations. But the same mechanism operates in governance.

When the EU legislated AI, they reached for the nearest regulatory framework they had: product safety, CE marking, quality management systems. That was the right instinct. You build with what you have, and what they built was genuinely ambitious, first-mover regulation for a technology that wasn't going to wait for perfect legislation. The AI Act contains novel elements that go well beyond QM: fundamental rights impact assessments, conformity assessments for high-risk systems, post-market monitoring obligations. Credit where it's due.

The problem isn't the regulation. The problem is what happens when it lands on an organisation's desk.

The absorption reflex

I've watched organisations absorb regulation before. There are really only two modes.

The first is transformation. The regulation changes how you operate. New processes, new thinking, new capabilities. The regulation becomes part of how the organisation works.

The second is absorption. You build a compliance layer around unchanged operations. The regulation gets a department, a set of documents, and an audit schedule. Everything underneath continues as before.

GDPR was supposed to be mode one. For most organisations, it became mode two. A DPO was hired, often reporting to legal, occasionally to IT, rarely to anyone with operational authority. Cookie banners appeared. Privacy policies were updated. A records-of-processing-activities spreadsheet was created, populated once, and filed somewhere in SharePoint.

The regulation didn't fail. The implementation did. And the enforcement gap made the cost calculus straightforward: the probability-weighted cost of a fine was lower than the cost of genuine transformation. Some fines were issued. Headlines were written. Most organisations continued as before.

Why AI governance is different, and why the same reflex is dangerous

AI systems aren't static products, which is where the QM analogy breaks down.

A quality management system assumes the thing being managed is knowable and stable. You define the process, you certify the process, you audit the process annually. The product coming off the line tomorrow is essentially the same as the one today.

AI systems don't work that way: they compound through use and drift over time. A model's behaviour can change with new data, new context, or even new interactions, without anyone changing a line of code. A system you classified as limited risk six months ago might behave as high risk today, and nobody noticed because the risk register is a static document reviewed once a year.
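
To make that concrete, here is a minimal sketch of what noticing such drift could look like: a two-sample Kolmogorov-Smirnov test comparing a model's current output scores against the distribution recorded at assessment time. The threshold, variable names, and choice of statistic are illustrative assumptions for this example, not anything the AI Act prescribes.

```python
# Illustrative sketch only: flag behavioural drift in a deployed model by
# comparing recent output scores against the baseline captured when the
# system was assessed. Threshold and statistic are assumed, not prescribed.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed significance threshold; tune per system


def behaviour_drifted(baseline_scores: np.ndarray,
                      recent_scores: np.ndarray) -> bool:
    """Return True when the output distribution has shifted enough that the
    documented risk classification may no longer describe what is running."""
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < DRIFT_P_VALUE


# Hypothetical data: scores at certification time vs. scores from last week.
rng = np.random.default_rng(42)
baseline = rng.normal(0.20, 0.05, 5000)  # behaviour at assessment time
recent = rng.normal(0.35, 0.08, 5000)    # behaviour after months of new data

if behaviour_drifted(baseline, recent):
    print("Behaviour has shifted: trigger a risk re-classification review")
```

The point is not the particular test. It's that the check runs against live behaviour, on a cadence the system sets, rather than against a document, on a cadence the audit calendar sets.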

This is the category error Chen describes, applied one level down. Organisations are mapping a new governance challenge onto an old governance framework. Not because the old framework is the right tool, but because it's the one they know how to operate. Annual audits for a system that changes weekly. Static risk classifications for dynamic behaviour. Documentation written for the certifier rather than for the people actually operating the system.

The AI Act demands more than this. Post-market monitoring. Continuous risk management. Ongoing conformity. The regulation's architecture acknowledges that AI systems are dynamic. But the compliance culture receiving it was built for static products.

Governance can keep up

The AI Act is a strong foundation. The question is whether organisations use it as something to build on, or as a ceiling to certify against.

Governance that keeps up with AI looks different from what most organisations are used to. It's continuous, not annual. It's operational, not documentary. It monitors the system as it actually behaves, not as it was described in a conformity assessment twelve months ago.
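
As a rough illustration of "continuous, not annual", the sketch below ties review to change and staleness rather than to the calendar. The class, cadence, and field names are assumptions made up for the example, not a real compliance framework.

```python
# Illustrative sketch, not a compliance tool: a governance record that goes
# stale on a short rolling window, and is invalidated by any change to the
# model or its inputs, instead of waiting for an annual audit.
import datetime


class GovernanceRecord:
    """Living record of when a system's behaviour was last verified against
    its documented risk classification."""

    def __init__(self, system_id: str, max_staleness_days: int = 7):
        self.system_id = system_id
        self.max_staleness_days = max_staleness_days
        self.last_verified: datetime.date | None = None

    def needs_review(self, model_changed: bool, data_changed: bool) -> bool:
        # Any change to the model or its data invalidates the last check;
        # otherwise the check simply goes stale on a rolling window.
        if model_changed or data_changed or self.last_verified is None:
            return True
        age = (datetime.date.today() - self.last_verified).days
        return age > self.max_staleness_days


record = GovernanceRecord("credit-scoring-v3")  # hypothetical system
if record.needs_review(model_changed=False, data_changed=True):
    print("Re-run behavioural checks before the next release")
```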

The organisations that get this right will treat AI governance as an operational discipline, something that's embedded in how they build, deploy, and monitor AI systems. Not a layer on top. Not a department down the hall. Not a SharePoint folder.

The ones that don't will have compliant documentation and non-compliant systems. Same as GDPR. Same as QM. Some fines. Nobody changed.

The regulation isn't the bottleneck. The reflex is.

