The Problem
I used to treat every failure as a surprise. Something would break in a session and I would fix it. Something would break in a project and I would patch it. Something would break across my entire portfolio and I would scramble. Each time felt like a new crisis. Each time I reacted instead of diagnosing. The cost was not just the fix itself -- it was the compounding damage that accumulated while I was still figuring out what went wrong.
The real problem was not that things broke. In any complex system -- especially one where I am routing decisions through AI at high velocity -- failure is structural. The AI's probabilistic output varies slightly each time. My attention fluctuates across a session. Instructions carry ambiguity. Context accumulates and subtly warps interpretation. Four individually normal variations compound into meaningful divergence. That is not a bug in the system. That is the system.
What kills execution is not corruption itself but the gap between when corruption starts and when I catch it. A misaligned AI instruction caught in the first minute costs me seconds. The same misalignment caught after fifty compounded instructions can cost me the entire session. A project drifting from its Target caught in week one costs a directional adjustment. The same drift caught in month three can require a complete teardown. The relationship between recognition delay and recovery cost is not linear -- it compounds. Every unit of undetected drift interacts with all the prior drift. I needed a diagnostic framework that made this visible at every scale.
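The compounding relationship can be illustrated with a toy model. This is my own construction for illustration only -- the per-step drift rate and compounding factor are hypothetical parameters, not numbers from the portfolio data:

```python
# Toy model (illustrative, not from the portfolio data): recovery cost
# when each unit of undetected drift interacts with all prior drift.

def recovery_cost(steps_undetected: int, base_cost: float = 1.0,
                  compounding: float = 1.2) -> float:
    """Cost of recovery after `steps_undetected` steps of unnoticed drift.

    Linear damage would be base_cost * steps; here each new step
    multiplies the accumulated damage, so total cost grows geometrically.
    """
    cost = 0.0
    damage = base_cost
    for _ in range(steps_undetected):
        cost += damage
        damage *= compounding  # new drift interacts with all prior drift
    return cost

early = recovery_cost(1)   # 1.0 -- caught immediately
late = recovery_cost(50)   # ~45,500 with these toy parameters, not 50x
```

With a modest 20% per-step compounding factor, fifty undetected steps cost roughly 45,000 times the first step, not fifty times -- which is the shape of the "not linear" claim above.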
What Spiral Anatomy Actually Is
Spiral Anatomy is a five-stage diagnostic pattern that describes how context corruption propagates and how graduated response contains it. The sequence is always the same: Corrupt, Drift, Recognize, Assess, Respond. It operates identically at task scale (a single AI session), project scale (accumulated architectural decisions), and ecosystem scale (portfolio-level strategic direction). The pattern does not prevent failure -- it makes failure legible so I can match the right response to the right severity.
What it provides:
- A universal diagnostic sequence -- the same five stages apply whether I am fixing a single session or restructuring an entire business model
- Recognition timing as a trainable skill -- once I named the pattern, I learned to catch drift earlier with each successive spiral
What it does not provide:
- Failure prevention -- corruption is structural in complex systems and will occur regardless of operator skill
- Automatic detection -- recognition still depends on my Environmental Control; the framework names the moment but does not trigger it for me
The framework's core insight is that the critical variable is Stage 3: Recognize. Stages 1 and 2 are inevitable. Stages 4 and 5 have clear protocols. The entire game is compressing the time between corruption starting and me catching it.
The Five-Stage Sequence at Every Scale
The five stages manifest differently depending on scale, but the structure is identical.
Stage 1: Corrupt. Context becomes contaminated. The AI introduces an error. I make a wrong assumption. A decision is built on incomplete information. At this point the corruption exists but has not propagated.
Stage 2: Drift. Corruption compounds undetected. Each subsequent decision builds on the contaminated context. No single decision looks wrong from the inside -- each one appears reasonable from the corrupted perspective. But accumulated divergence grows with every decision. This is where normalization of deviance sets in: I progressively accept misaligned output because the deviation is gradual enough that each step looks like "close enough."
Stage 3: Recognize. I catch the drift. This is the moment that determines everything. Recognition can be triggered by clearly wrong output, a pattern of increasing corrections, a feeling of confusion or friction, the Governor detecting system degradation, or a deliberate Stop and Recap check. Earlier recognition means healthier execution. Later recognition means weaker Environmental Control.
Stage 4: Assess. I determine severity across two categories: repairable (context can be salvaged, corruption is contained and correctable) or poisoned (context must be destroyed, corruption has permeated too deeply for targeted correction). The assessment dictates the response level.
Stage 5: Respond. I deploy the graduated response that matches the assessed severity. Repairable at task scale gets Stop, Pause, Reset or Stop and Recap. Poisoned at task scale gets Stop. Run It Back. Repairable at project scale gets Realign. Poisoned at project scale gets Tear Down. The graduation prevents both under-reaction (patching what should be rebuilt) and over-reaction (rebuilding what could be patched).
At task scale, the entire spiral plays out in minutes to hours. At project scale, it takes days to weeks. At ecosystem scale, it takes weeks to months. The response cost scales accordingly -- which is exactly why earlier recognition at every scale is the highest-leverage skill I can develop.
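Assuming assessment yields exactly one of the two verdicts at each scale, the Stage 4 to Stage 5 mapping can be sketched as a lookup table. The response names come from the text (the two ecosystem-level responses, Recalibrate and Hard Reset, are named under "Match Response to Scale" below); the dictionary framing is mine:

```python
# Graduated response chain: (scale, severity) -> named response(s).
# Response names are from the text; the structure is an illustrative sketch.
RESPONSES = {
    ("task", "repairable"):      ["Stop, Pause, Reset", "Stop and Recap"],
    ("task", "poisoned"):        ["Stop. Run It Back"],
    ("project", "repairable"):   ["Realign"],
    ("project", "poisoned"):     ["Tear Down"],
    ("ecosystem", "repairable"): ["Recalibrate"],
    ("ecosystem", "poisoned"):   ["Hard Reset"],
}

def respond(scale: str, severity: str) -> list:
    """Stage 5: deploy the response matching the assessed scale and severity."""
    return RESPONSES[(scale, severity)]
```

The point of the table shape is that both failure modes named above -- under-reaction and over-reaction -- are lookups into the wrong row, not missing rows.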
What the Data Shows
The CEM validation portfolio -- 596,903 lines of code, 10 systems, ~2,246 deduplicated commits across four months -- produced 23 documented recovery events that map directly onto Spiral Anatomy's five-stage pattern.
| Scale | Events | Example | Key Metric |
|---|---|---|---|
| Task | 15 | Frustration-triggered session corrections (L1-L5) | Minutes to recover |
| Project | 3 | PRJ-03 strip-rebuild: 43.2% rework rate, 24 commits in 4.2 hours | Days to recover |
| Ecosystem | 4 | September detonation: margin collapsed from 70% to 0.99% | Months to recover |
The distribution itself validates the framework: task-level spirals are frequent and cheap, project-level spirals are infrequent and moderate, ecosystem-level spirals are rare and expensive.
The financial evidence is the most striking. PRJ-12 ran at a 106% payout ratio -- $691K in affiliate payouts against $652K in revenue. The business was paying out more to generate revenue than that revenue was worth. AFF-08 under PRJ-13 operated at a 94.6% payout ratio. Revenue climbed from $4K to $43K monthly between February and August 2025, masking a margin collapse from 70% to 11%. By September: $61,675 revenue, $61,061 in payouts, $615 gross profit, and an EBITDA loss of $8,332. That was the recognition moment -- months late, hundreds of thousands of dollars deep. The response was ecosystem-level Hard Reset: the 60-day infrastructure build that produced the entire CEM validation dataset, driving monthly operating cost from $8,367 to $0.
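A quick sanity check on the arithmetic, using only figures from the text:

```python
# All dollar figures are from the text above.
payout_ratio_prj12 = 691_000 / 652_000        # ~1.06 -> the 106% payout ratio

sept_revenue, sept_payouts = 61_675, 61_061
gross_profit = sept_revenue - sept_payouts    # 614 at whole-dollar precision
                                              # (the text's $615 implies cents rounding)
gross_margin = gross_profit / sept_revenue    # ~0.0100 -> the ~0.99% in the table
```

The September gross margin of roughly one percent is the "0.99%" endpoint in the ecosystem row of the table, down from the 70% starting point.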
The portfolio also shows improving recognition timing across the validation period. Early projects in October show longer gaps between drift onset and correction. Later projects in January show tighter correction cycles. The framework is learnable: operators who understand the pattern recognize it faster.
How to Apply It
1. Name the Stage You Are In. When something feels off in execution, run the five-stage checklist: Has context been corrupted? Is drift compounding? Am I in the recognition moment right now? Do I need to assess severity? What response matches? Naming the stage converts vague unease into actionable diagnosis. The vocabulary alone accelerates recognition because it gives the pattern a handle I can grab.
2. Train Recognition, Not Prevention. Stop optimizing for zero failure -- it is impossible in complex systems. Instead, optimize for fast recognition. Build habits that surface drift early: periodic Stop and Recap checks even when things seem fine, attention to physical signals like frustration and confusion, Governor monitoring of system-level metrics like rework rate and velocity trends. The goal is compressing the gap between Stage 1 and Stage 3.
3. Assess Honestly: Repairable or Poisoned. The most expensive mistake in recovery is misclassifying severity. Patching a poisoned context wastes time on fixes that will not hold. Tearing down a repairable context destroys salvageable progress. Ask the hard question: has the corruption permeated deeply enough that targeted correction cannot reach it? If yes, accept the loss and deploy the nuclear response. If no, deploy the light fix and move forward.
4. Match Response to Scale. Use the graduated escalation chain. Task-scale corruption gets task-scale response: Stop, Pause, Reset for minor drift; Stop and Recap for bilateral divergence; Stop. Run It Back for poisoned sessions. Project-scale gets Realign or Tear Down. Ecosystem-scale gets Recalibrate or Hard Reset. The pattern is the same at every scale -- only the magnitude of the response changes. Never deploy an ecosystem-level response to a task-level problem, and never deploy a task-level fix to an ecosystem-level collapse.
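The Governor-style monitoring habit in step 2 could be sketched as a rolling correction-rate check. Everything here -- the class name, the window size, the 40% threshold -- is a hypothetical illustration of the habit, not the actual Governor implementation:

```python
# Hypothetical sketch of a drift-surfacing habit: track whether recent
# steps needed correction, and flag when the correction rate over a
# rolling window crosses a threshold -- a prompt for a Stop and Recap check.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 10, threshold: float = 0.4):
        # True = this step's output needed a correction
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, needed_correction: bool) -> bool:
        """Log one step; return True when a Stop and Recap check is due."""
        self.outcomes.append(needed_correction)
        rate = sum(self.outcomes) / len(self.outcomes)
        # Only flag once the window is full, to avoid noisy early readings.
        return len(self.outcomes) == self.outcomes.maxlen and rate >= self.threshold
```

The design choice worth noting: the monitor watches a pattern of corrections, not any single bad output -- matching the observation in Stage 2 that no individual decision looks wrong from inside the drift.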
References
- Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press.
- Keating, M.G. (2026). "Foundation." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Governor." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Stop, Pause, Reset." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Stop and Recap." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Stop, Run It Back." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Environmental Control." Stealth Labz CEM Papers.