Automated systems fail in predictable ways. This book documents recurring governance failure patterns that can be recognised from outside the system, without access to internal code, models, or documentation. It is written for readers who need to identify when accountability cannot operate in practice, not to diagnose intent or propose fixes.
The focus is recognition. Each chapter names a specific structural condition and shows how it becomes visible through user interaction, absence of evidence, irreversible outcomes, or non-functional safeguards. These failures are not errors or misuse. They persist even when systems operate as designed and arise from architecture, delegation of authority, retention practices, vendor boundaries, and incentive structures.
The tests described here are external-facing. They can be applied by users, regulators, auditors, journalists, and investigators through direct interaction with a system and examination of what can and cannot be demonstrated after the fact. No privileged access is required. Recognition is the endpoint. Where a condition is present, accountability cannot be made to function regardless of explanation or assurance.
This book documents fifteen structural failures that appear when AI-driven systems operate at scale.
These failures persist even when systems work exactly as designed. They arise from architecture, retention practices, contractual boundaries, delegation patterns, and incentive structures, not from bugs, errors, or misconduct.
What this book is not
- This is not a compliance guide
- It does not certify correctness or determine legality
- It does not prescribe architecture, safeguards, or remedies
- It does not interpret statutes or establish regulatory standards
The tests stop at recognition. They identify whether a structural condition exists.
What follows (enforcement, remedy, redesign, or litigation) belongs to other processes and other authorities.
Who this book is for
- Senior technologists and governance professionals confronting systems that claim accountability but cannot demonstrate it
- Regulators documenting why evidence cannot be produced under scrutiny
- Legal practitioners reconstructing decisions that left no usable trail
- Journalists and researchers investigating automated power without internal access
- Anyone who recognises recurring failures but lacks precise language to name them
The structural reality
Governance depends on evidence that survives time. Where required evidence cannot be produced, oversight cannot operate as designed.
Some rules require not only correct behaviour but demonstrable correctness. Where demonstration fails, evidential absence itself becomes exposure.
Recognition is not the end of analysis; it is the beginning of accountability that can withstand scrutiny.