The Risks of Overcomplicated AI: Lessons from KPMG's 100-Page TaxBot

KPMG recently unveiled a “TaxBot” powered by a 100-page prompt that generates draft tax advice in a single day. On paper, that looks impressive. In practice, the 100-page prompt itself is a design flaw:
- Hard to update when laws change.
- Impossible to debug when outputs go wrong.
- Untraceable: no one can pinpoint which part of the prompt drives which answer.
- Unexplainable: outputs can’t be justified to regulators or clients.
- Risky: in a high-stakes domain, a single mistake can cost millions.

This case is not about one firm. It highlights a broader temptation: to solve complexity with more text rather than with better design.
And that’s exactly where the EU AI Act comes in. High-risk systems, a category that tax advice falls squarely into, will require:
- Transparency → you must know how outputs are generated.
- Explainability → you must justify results to regulators, auditors, and clients.
- Governance → you must demonstrate controls, versioning, and oversight (a minimal sketch of this follows below).
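What could “controls, versioning, and oversight” look like in code? Below is a minimal Python sketch, assuming only that prompt text is split into small reviewable units. Every name in it (PromptModule, registry, the module ID) is hypothetical, not a description of KPMG’s system or of any particular library.

```python
from dataclasses import dataclass
from datetime import date
import hashlib

@dataclass(frozen=True)
class PromptModule:
    """One auditable unit of prompt text, versioned like source code."""
    module_id: str       # e.g. "vat.cross-border" (hypothetical naming scheme)
    version: str         # bumped whenever the underlying rule or law changes
    owner: str           # the professional accountable for this fragment
    last_reviewed: date  # when a human last signed off on the wording
    text: str            # the prompt fragment itself

    @property
    def content_hash(self) -> str:
        # Hash the text so an auditor can prove which wording was in force.
        return hashlib.sha256(self.text.encode()).hexdigest()[:12]

# A registry is just a dict here; in practice it would live in version control.
registry: dict[str, PromptModule] = {}

def register(module: PromptModule) -> None:
    registry[module.module_id] = module

register(PromptModule(
    module_id="vat.cross-border",
    version="3.1",
    owner="j.doe@example.com",
    last_reviewed=date(2024, 11, 1),
    text="For cross-border supplies within the EU, check whether the "
         "reverse-charge mechanism applies before computing output VAT.",
))
```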
A monolithic 100-page prompt fails on every front. It is unmanageable by design.
So what does work? We see three key principles, pulled together in the code sketch after this list:
- Modular design: smaller, auditable components instead of giant blocks of text.
- Traceability: every output should be traceable back to a rule, dataset, or prompt section.
- Human oversight: accountability must stay with professionals, not hidden inside a black box.
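Continuing the same hypothetical sketch from above, all three principles become small, checkable mechanisms: the prompt is assembled from named, versioned modules (modular design), every draft carries the exact module versions and hashes that produced it (traceability), and nothing is released without a named reviewer (human oversight). The function names and the reviewer gate are illustrative assumptions, not an established pattern from any specific framework.

```python
def assemble_prompt(module_ids: list[str]) -> tuple[str, list[str]]:
    """Modular design: build the prompt from small named units,
    returning a provenance trail alongside the assembled text."""
    parts, provenance = [], []
    for mid in module_ids:
        module = registry[mid]
        parts.append(module.text)
        # Traceability: record exactly which version and wording was used.
        provenance.append(
            f"{module.module_id}@{module.version} (hash {module.content_hash})"
        )
    return "\n\n".join(parts), provenance

def release_advice(draft: str, provenance: list[str], reviewer: str | None) -> str:
    """Human oversight: a draft with no named reviewer never leaves the system."""
    if reviewer is None:
        raise PermissionError("No professional has signed off on this draft.")
    audit_line = f"Reviewed by {reviewer}; sources: {', '.join(provenance)}"
    return f"{draft}\n\n[{audit_line}]"

prompt, trail = assemble_prompt(["vat.cross-border"])
# draft = some_llm_client.complete(prompt)  # model call omitted on purpose
# print(release_advice(draft, trail, reviewer="j.doe@example.com"))
```

None of this is sophisticated engineering, and that is the point: each property becomes something you can check, rather than something buried in 100 pages of prose.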
The lesson? Less is more. Effective AI isn’t built on endless prompt pages, but on structures that scale: governance, monitoring, and explainability.
Because in the real world, complexity doesn’t scale. It breaks.