
The Risks of Overcomplicated AI: Lessons from KPMG's 100-Page TaxBot

KPMG recently unveiled a “TaxBot” powered by a 100-page prompt to generate draft tax advice in a single day. On paper, that looks impressive. In practice, a 100-page prompt is a design flaw:

  • Hard to update when laws change.

  • Impossible to debug when outputs go wrong.

  • Untraceable — no one can pinpoint which part of the prompt drives which answer.

  • Unexplainable — outputs can’t be justified to regulators or clients.

  • Risky — in a high-stakes domain, a single mistake can cost millions.

This case is not about one firm. It highlights a broader temptation: to solve complexity with more text rather than with better design.

And that’s exactly where the EU AI Act comes in. High-risk systems, such as those generating tax advice, will require:

  • Transparency → you must know how outputs are generated.

  • Explainability → you must justify results to regulators, auditors, and clients.

  • Governance → you must demonstrate controls, versioning, and oversight.

A monolithic 100-page prompt fails on every front. It is unmanageable by design.

So what does work? We see three key principles:

  1. Modular design — smaller, auditable components instead of giant blocks of text (see the sketch after this list).

  2. Traceability — every output should be traceable back to a rule, dataset, or prompt section.

  3. Human oversight — accountability must stay with professionals, not hidden inside a black box.
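
To make the first two principles concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration (the names PromptModule and ComposedPrompt, the fields, the example module); it is not taken from KPMG's system or any particular framework. The idea: each module encodes one rule, carries a version and a source, and the composed prompt can report exactly which components went into any given answer.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)
class PromptModule:
    """One small, auditable unit of instruction text (hypothetical)."""
    module_id: str       # stable identifier, e.g. "vat-reverse-charge"
    version: str         # bumped whenever the underlying rule or law changes
    source: str          # the statute, ruling, or dataset this module encodes
    text: str            # the instruction text itself
    last_reviewed: date  # when a professional last signed off on it


@dataclass
class ComposedPrompt:
    """A prompt assembled from labeled modules instead of one monolith."""
    modules: list[PromptModule] = field(default_factory=list)

    def render(self) -> str:
        # Each section is delimited and labeled, so a reviewer can map
        # any part of the output back to the module that drove it.
        return "\n\n".join(
            f"[{m.module_id} v{m.version}]\n{m.text}" for m in self.modules
        )

    def provenance(self) -> list[dict]:
        # Audit trail: which modules, in which versions, from which sources.
        return [
            {
                "module": m.module_id,
                "version": m.version,
                "source": m.source,
                "last_reviewed": m.last_reviewed.isoformat(),
            }
            for m in self.modules
        ]


# Usage: when a regulation changes, only the affected module is updated,
# re-reviewed, and re-versioned; the rest of the prompt is untouched.
reverse_charge = PromptModule(
    module_id="vat-reverse-charge",
    version="2.1",
    source="EU VAT Directive 2006/112/EC (reverse-charge provisions)",
    text="If the reverse-charge mechanism applies, state this explicitly "
         "and cite the relevant provision.",
    last_reviewed=date(2025, 1, 15),
)
prompt = ComposedPrompt(modules=[reverse_charge])
print(prompt.render())
print(prompt.provenance())
```

The value is not this particular structure but the property it buys: when a regulation changes, you update and re-review one module, and every output ships with an audit trail a regulator can follow.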

The lesson? Less is more. Effective AI isn’t built on endless prompt pages, but on structures that scale: governance, monitoring, and explainability.

Because in the real world, complexity doesn’t scale. It breaks.
