The Problem
I discovered the hard way that AI context drift is invisible in any single exchange. I say "build the form." The AI builds something 95% aligned. I say "add validation." The AI validates against its 95%-correct understanding of the form -- now we are at 90%. By the twentieth instruction, alignment is at 50% or lower, and I cannot point to the moment it went wrong because no individual instruction was the problem. The accumulated divergence was.
My instinct was to correct harder. "No, I meant this." "Change that to this." Every correction got processed within the already-contaminated context. The AI adjusted the specific element I flagged but retained the broader misunderstanding. I was fixing symptoms while the disease kept compounding. It was like repeating instructions louder to someone who misunderstood the premise -- volume does not fix a framing problem.
The lighter tool in the CEM escalation chain -- Stop, Pause, Reset -- resets my own perspective, and that handles a lot. But it is unilateral. It resets me without addressing the AI's accumulated context. When the problem is that the AI's working model has drifted from my intent, resetting only my side leaves half the gap unaddressed. I needed something bilateral -- a mechanism that forced both participants to surface their understanding so I could see where they diverged.
What Stop and Recap Actually Is
Stop and Recap is a structured three-question protocol that forces the AI to externalize its current understanding so the operator can compare it against intent and diagnose exactly where context drifted. It is the medium-fix tactic in CEM's escalation chain: deployed after Stop, Pause, Reset proves insufficient and before Stop, Run It Back becomes necessary. The protocol preserves session work while re-establishing shared reality.
What it provides:
- Bilateral reality check -- both operator and AI surface their understanding, making the gap between them visible and diagnosable
- Session preservation -- corrects the direction without destroying the accumulated work, avoiding the cost of a full context reset
What it does not provide:
- A fix for poisoned context -- when the AI's understanding is fundamentally broken rather than drifted, recap diagnoses the corruption but cannot repair it; escalation to Stop, Run It Back is required
- Automatic drift detection -- the operator must recognize that context has drifted and initiate the protocol; Stop and Recap does not fire on its own
The three questions -- "What have we done so far?", "What is the current task?", and "What is the next immediate step?" -- map directly to state model, task model, and projection. Each answer reveals a specific layer of alignment or divergence. The gap diagnosis determines the correction: clarify state, re-state the task, provide explicit next-step direction, or escalate if all three layers have diverged.
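As a concrete illustration, that question-to-layer mapping can be written down directly. The sketch below is mine, not part of the CEM papers; the dictionary keys and correction strings are illustrative labels for the layers and fixes the paragraph above names.

```python
# Minimal sketch of the Stop and Recap mapping (illustrative labels only):
# each question probes one layer of the AI's working model, and a gap at
# that layer calls for a specific level of correction.
RECAP_MAP = {
    "What have we done so far?": {
        "layer": "state model",   # what the AI thinks exists right now
        "fix": "clarify what actually exists",
    },
    "What is the current task?": {
        "layer": "task model",    # what the AI thinks it is working on
        "fix": "re-state the task with explicit scope",
    },
    "What is the next immediate step?": {
        "layer": "projection",    # where the AI thinks the work is headed
        "fix": "provide explicit next-step direction",
    },
}
```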
The Three-Question Diagnostic
The protocol produces a structured comparison across three dimensions, and the gap pattern determines what to do next.
Question 1: "What have we done so far?" The AI recounts what it believes has been completed. I listen for accuracy, claimed completions I do not recognize, omissions of work I expected, and characterizations that reveal misunderstanding. This surfaces the state model -- what the AI thinks exists right now.
Question 2: "What is the current task?" The AI states what it believes it is working on. Divergence here is the primary diagnostic signal. If the AI thinks it is building X while I intend Y, every subsequent instruction will be interpreted through the wrong frame. This is task model divergence, the most common and most damaging type.
Question 3: "What is the next immediate step?" The AI proposes where execution should go. I evaluate whether that next step advances toward my locked Target or diverges further. This is projection -- the AI's forecast of where the work is headed.
The gap pattern tells me what level of fix is needed. State model gap only: clarify what actually exists. Task model gap: re-state the task with explicit scope. Projection gap: provide direction rather than letting the AI extrapolate. Multiple gaps across all three dimensions: the context is likely poisoned, and I escalate to Stop, Run It Back. When the recap reveals exactly where things diverged and shared reality is re-established within minutes, the protocol worked. When the AI's understanding is fundamentally wrong -- not drifted but broken -- or multiple recaps produce the same confusion, that is the signal to stop preserving and start rebuilding.
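To make the triage rule explicit, here is a small decision sketch, assuming the operator has already judged each layer as aligned or diverged. The function name and return strings are hypothetical; the source does not prescribe code, and the handling of a two-gap pattern (apply the strongest single-layer fix, task model first) is one reasonable reading rather than a stated rule.

```python
def diagnose_gap_pattern(state_gap: bool, task_gap: bool, projection_gap: bool) -> str:
    """Map a recap's gap pattern to the level of fix described above.

    Inputs are the operator's own judgment of each answer -- the protocol
    does not detect drift on its own.
    """
    if state_gap and task_gap and projection_gap:
        # All three layers diverged: the context is likely poisoned.
        return "escalate: Stop, Run It Back (full context reset)"
    if task_gap:
        # Task model divergence is the primary diagnostic signal.
        return "re-state the task with explicit scope"
    if state_gap:
        return "clarify what actually exists"
    if projection_gap:
        return "provide explicit next-step direction"
    return "aligned: resume execution"
```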
What the Data Shows
Stop and Recap was validated through the production of ten software systems between October 2025 and February 2026. As an operator behavior, it does not leave direct git artifacts -- validation is inferential, drawn from patterns in the commit history and session structure.
| Metric | Value | What It Indicates |
|---|---|---|
| Total output | 596,903 lines of code | Scale across which context recovery operated |
| Systems shipped | 10 | Portfolio breadth requiring sustained context management |
| Duration | 4 months | Time horizon over which drift patterns compounded |
| Commits | 2,561 raw / ~2,246 deduplicated | Continuous execution with mid-session directional corrections |
| Product bug rate | 12.1% | Drift caught at session level before compounding |
The commit history shows directional corrections mid-session: commits within the same session where output shifts from one approach to an aligned approach without a full restart. That pattern is consistent with recap revealing divergence and enabling correction -- the work is preserved, the direction is corrected. Long execution sessions (12+ hours) maintained consistent commit quality throughout, suggesting periodic context recovery was occurring. Without it, quality would degrade as drift accumulated.
The CEM Recovery Events Evidence Log documents the escalation chain in action. On January 27, 2026, when an AI tool deleted a landing page for PRJ-02, the operator escalated through the full tactical chain. Stop, Pause, Reset deployed first. When clarity did not return, Stop and Recap deployed -- the AI enumerated five specific failures. The recap revealed context poisoned beyond repair, triggering escalation to Stop, Run It Back. Separately, the operator identified six problems with the PRJ-02 landing page through systematic assessment -- wrong story, equal treatment of projects, filler sections, buried offer -- recap behavior applied at the project level, producing an actionable diagnosis that informed the rebuild.
How to Apply It
1. Recognize the Drift Signal. Watch for the pattern: corrections producing new problems instead of fixing old ones. Output that is consistently off but not completely wrong. The feeling that you and the AI are talking past each other. If Stop, Pause, Reset did not resolve the misalignment, the drift is bilateral and you need both sides surfaced.
2. Deploy the Three-Question Protocol. Stop execution and ask the AI: "What have we done so far? What is the current task? What is the next immediate step?" Do not paraphrase or soften. Force the AI to give a full accounting. Compare each answer against your own understanding and note where they diverge.
3. Diagnose the Gap Pattern. Identify which models diverged: state, task, projection, or multiple. A state gap needs a factual correction about what exists. A task gap needs a clear re-statement of intent. A projection gap needs explicit next-step direction. If all three have diverged, do not try to repair -- escalate to a full context reset.
4. Bridge and Resume -- or Escalate. If the gap is bridgeable, provide the correction at the right level and resume execution. Verify alignment on the very next output before building further. If the recap reveals fundamental corruption -- the AI's understanding is broken, not drifted -- escalate to Stop, Run It Back without attachment to sunk session investment. The diagnosed divergence pattern becomes a Foundation asset for recognizing similar drift faster in future sessions; the sketch after this list walks through the loop.
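A sketch of the four steps as the operator might script them, for illustration only. `ask_model` stands in for whatever chat tool is in use, `operator_marks_gaps` represents the human judgment call after reading the recap, and the log path and returned strings are hypothetical.

```python
import json
import time

RECAP_PROMPT = (
    "Stop. Before continuing, answer in full: "
    "1) What have we done so far? "
    "2) What is the current task? "
    "3) What is the next immediate step?"
)

def stop_and_recap(ask_model, operator_marks_gaps, log_path="recap_log.jsonl"):
    """One pass of the four steps: deploy the questions, let the operator
    mark divergence, decide bridge vs. escalate, and record the pattern.

    ask_model(prompt) -> str: placeholder for any chat interface.
    operator_marks_gaps(recap) -> dict like
        {"state": False, "task": True, "projection": False},
    the operator's own judgment -- Stop and Recap never fires or
    diagnoses on its own.
    """
    recap = ask_model(RECAP_PROMPT)
    gaps = operator_marks_gaps(recap)
    escalate = all(gaps.values())  # all three layers diverged

    # Record the diagnosed divergence pattern so similar drift can be
    # recognized faster in later sessions (the Foundation asset idea).
    with open(log_path, "a") as f:
        f.write(json.dumps({"time": time.time(), "gaps": gaps,
                            "escalate": escalate}) + "\n")

    if escalate:
        return "Stop, Run It Back: rebuild the context, do not keep patching"
    return "bridge at the diverged layer, then verify the very next output"
```

The escalation test mirrors the gap-pattern rule above: only an all-three divergence triggers a full reset, while any single-layer gap is bridged in place and checked on the next output.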
References
- Keating, M.G. (2026). "Foundation." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Target." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Stop, Pause, Reset." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Stop, Run It Back." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Environmental Control." Stealth Labz CEM Papers.