The Problem
AI does not deliberately change values. It approximates. A metric of 596,903 becomes "approximately 597,000" in one session and "nearly 600,000" in the next. A date of October 7, 2025 becomes "early October 2025" and then "Fall 2025." A specification with five required fields becomes six when the AI helpfully adds one that seems logical. No single approximation is wrong enough to catch. But each one moves the working value further from the validated original, and the next session treats the approximated value as its starting point.
That is the anchoring problem inverted. Arbitrary initial values are supposed to distort judgment -- but in AI-native execution, the risk runs the other direction. Validated initial values get replaced by AI-generated approximations, and the approximations become the new anchors. I am now judging output against drifted values rather than against reality. The AI's output becomes self-referencing: it sounds correct because it is internally consistent, even when it has drifted from external truth.
The compounding effect makes it worse. Early in a project, almost everything is in motion -- specifications are drafts, metrics are estimates, scope is flexible. As execution progresses, values get validated through use, data, or research. The execution space should narrow. But that narrowing only happens if validated values are explicitly locked and enforced. Without deliberate anchoring, the AI treats a finalized database schema with the same flexibility as a working assumption. An approved price point gets adjusted because the AI recalculates. A confirmed launch date gets shifted because the AI resequences tasks. The validation is lost because nothing distinguished a decided value from an idea still in motion.
What Anchored Data Actually Is
Anchored Data is the fixed reference layer that every drift-detection mechanism in CEM requires. I identify values that have been validated through use, data, or research -- then I lock them through deliberate decision. Those locked values become the objective standard that all AI output gets measured against. Changes to anchors require conscious operator action, never incremental AI approximation.
What it provides:
- Drift detection through verification -- when the AI contradicts a locked value, the deviation is immediately visible as a specific contradiction rather than a vague feeling that something is off
- Progressive constraint accumulation -- each locked value narrows the execution space, so later sessions operate within an increasingly accurate model of reality rather than re-deriving previously settled decisions
What it does not provide:
- Automatic value locking -- the operator must deliberately decide which values are fixed; nothing in the system promotes working assumptions to anchors without conscious human judgment
- A substitute for judgment -- anchors that are locked too early (before adequate validation) create false constraints and rigidity where flexibility is still needed
An anchor must meet four criteria: validated through use, data, or research; relevant across sessions; capable of causing real harm if silently contradicted; and locked by explicit operator decision. Working assumptions, preferences that might shift, and exploratory AI output that has not been verified do not qualify. The distinction between "decided" and "probably decided" is the entire point.
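A minimal sketch of how an anchor record might be represented, assuming nothing beyond the four criteria above; the `Anchor` type and its field names are illustrative, not part of CEM:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: fields cannot be mutated in place once constructed
class Anchor:
    key: str             # stable identifier, e.g. "launch_date"
    value: object        # the locked value itself
    validated_by: str    # criterion 1: how it was validated (use, data, or research)
    cross_session: bool  # criterion 2: relevant beyond the current session
    harm_if_wrong: str   # criterion 3: concrete harm if silently contradicted
    locked_by: str       # criterion 4: the operator who made the explicit decision
    locked_on: date

def qualifies(a: Anchor) -> bool:
    """All four criteria must hold; 'probably decided' fails the locked_by test."""
    return bool(a.validated_by and a.harm_if_wrong and a.locked_by) and a.cross_session
```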
Three Phases of Anchor Accumulation
Anchors do not appear all at once. They accumulate in three phases that progressively constrain the execution space.
Phase 1: Pre-execution anchoring. Before a cycle begins, I identify what is already known and lock it. Launch dates, price points, confirmed API specifications, signed contracts, verified metrics -- these are anchors before the first line of work. I establish them at session start, before any output is generated. This is the shared reality the AI operates within.
Phase 2: Execution-generated anchoring. As work progresses, new values get resolved. A database schema gets finalized. A conversion rate gets validated. A brand voice gets approved. The moment a value transitions from exploring to decided, I lock it as an anchor. This is a deliberate act -- not passive. I recognize the transition and promote the value from working output to fixed reference.
Phase 3: Anchor accumulation. Each locked value constrains the execution space. Early in a project, the AI has wide latitude because few anchors exist. As execution progresses, anchors accumulate and the corridor narrows. The AI's output becomes increasingly constrained against objective reality. The system gets more precise as it moves forward, not less. This is Foundation's compounding expressed as constraint -- not just more work done, but more fixed points that prevent regression and drift.
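The accumulation itself can be sketched as a store whose `lock` operation refuses silent overwrites, reusing the hypothetical `Anchor` record above. The monotonic lock is what makes the corridor narrow rather than wander (all values below are illustrative):

```python
from datetime import date

class AnchorStore:
    """Accumulates locked values; relocking an existing key requires change control."""

    def __init__(self):
        self._anchors: dict[str, Anchor] = {}

    def lock(self, anchor: Anchor) -> None:
        if anchor.key in self._anchors:
            raise ValueError(f"'{anchor.key}' is already locked; use deliberate change control")
        self._anchors[anchor.key] = anchor

    def all(self) -> list[Anchor]:
        return list(self._anchors.values())

store = AnchorStore()
# Phase 1: pre-execution anchoring, before the first line of work
store.lock(Anchor("launch_date", date(2026, 3, 1), "signed contract", True,
                  "missed launch commitment", "operator", date(2025, 10, 7)))
# Phase 2: execution-generated anchoring, the moment the schema is finalized
store.lock(Anchor("users_table_columns", 5, "schema review", True,
                  "silent migration breakage", "operator", date(2025, 11, 2)))
```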
In practice, anchors live within Foundation but serve a different function. Foundation fuels work. Anchors constrain it. At session start, I feed relevant anchors to establish shared reality. During execution, output gets checked against those anchors. When the AI contradicts one -- changed a date, rounded a metric, modified a specification -- the anchor makes the deviation visible immediately. And when an anchor legitimately needs to change, I stop, make the decision consciously, update the anchor, and propagate the change. The old value gets stashed through the Pendulum; the new value gets locked. That is deliberate change control, not incremental drift.
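Deliberate change control under the same assumptions: the old anchor is stashed before the new one is locked, so every change is traceable. The plain list here stands in for the Pendulum mechanism the paper names; it is a sketch, not that mechanism:

```python
def change_anchor(store: AnchorStore, stash: list[Anchor], new: Anchor) -> None:
    """Conscious replacement of a locked value: stash the old anchor, then relock."""
    old = store._anchors.pop(new.key)  # raises KeyError if the key was never locked
    stash.append(old)                  # old value preserved for traceability
    store.lock(new)                    # the new value goes through the same explicit path
```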
What the Data Shows
Anchored Data was validated across ten software systems totaling 596,903 lines of production code and 2,561 commits over four months (October 7, 2025 through February 2, 2026). The mechanism is observable through reference integrity maintenance and drift events caught when anchors were violated.
The CEM portfolio itself provides the clearest evidence. During formalization, I created LOCKED_VALUES.md -- a single document containing every validated metric, timeline, and classification. All derivative documents get checked against this anchor file. When a document contradicts LOCKED_VALUES.md, the document is wrong -- not the anchor. That file has been updated through deliberate change control (version 1.0 to 1.1), with each change traceable to a specific correction event.
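The format of LOCKED_VALUES.md is not specified in the paper; assuming simple `key: value` lines, the derivative-document check might look like the sketch below, with the anchor file always treated as the source of truth:

```python
from pathlib import Path

def load_locked_values(path: str = "LOCKED_VALUES.md") -> dict[str, str]:
    """Parse 'key: value' lines; skip headings and blanks. File format is assumed."""
    locked = {}
    for line in Path(path).read_text().splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            locked[key.strip()] = value.strip()
    return locked

def audit(doc_path: str, locked: dict[str, str]) -> list[str]:
    """Flag every locked key a document mentions without its locked value."""
    text = Path(doc_path).read_text()
    return [
        f"{doc_path}: mentions '{key}' but not locked value '{value}'"
        for key, value in locked.items()
        if key in text and value not in text
    ]

if __name__ == "__main__":
    locked = load_locked_values()
    for doc in Path(".").glob("**/*.md"):
        if doc.name != "LOCKED_VALUES.md":
            for finding in audit(str(doc), locked):
                print(finding)  # when a document contradicts the anchor, the document is wrong
```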
The portfolio audit revealed exactly what happens without anchoring:
| Value | Drifted Appearances | Anchored Value | Source of Truth |
|---|---|---|---|
| Total commits | 2,434 / 2,182 | 2,561 | Git records |
| Rework rate | "<15%" / "25.8%" | 23.7% | Per-project calculation |
| Contractor spend | $82,521 | $65,054 | QuickBooks-verified |
| HOA sweep cost | $840 | $90 | Corrected in v1.1 |
Every drifted value was internally consistent within its own document. Every one looked reasonable. The anchored value exposed each as factually wrong. Without anchors, those drifted values would have propagated unchallenged -- because they sounded right.
The constraint accumulation effect is visible in MVP timelines. Early projects with few anchors took 14-21 days to reach MVP. Later projects, inheriting a growing constraint layer of locked schemas, validated API patterns, confirmed pricing models, and finalized branding guidelines, reached MVP in 4-5 days. Anchor density eliminated the re-derivation of previously settled decisions.
How to Apply It
1. Lock Values the Moment They Are Decided. When a value transitions from exploring to decided -- validated through use, data, or research -- promote it from working assumption to fixed reference immediately. Do not let the transition happen implicitly. A value that drifts back to "flexible" after being validated is a value that was never locked. The act of locking must be deliberate and conscious.
2. Feed Anchors at Every Session Start. The AI does not remember previous sessions' constraints. Every session starts from a blank context unless I establish the constraint layer. Relevant anchored values provided at session start prevent the AI from re-deriving or approximating values that have already been validated. This is not overhead -- it is the shared reality that makes the session productive. (Steps 2 and 3 are sketched in code after this list.)
3. Check Output Against Anchors, Not Gut Feel. During execution, verify AI output against locked values. Did it change a date? Contradict a specification? Use old pricing? Round a metric differently? Anchors make deviations visible through comparison, not instinct. The difference between "that sounds about right" and "that contradicts the locked value" is the difference between guessing and verifying.
4. Investigate Contradictions, Not Just Correct Them. When the AI contradicts an anchored value, the contradiction is a signal -- not just an error to fix. The AI did not randomly change a number. Something in the context led to the deviation. Understanding why the AI drifted reveals whether the context itself has degraded. The first point outside control limits triggers investigation that often reveals broader drift beyond the single contradicted value (see the second sketch after this list).
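Steps 2 and 3, sketched against the hypothetical `AnchorStore` above: render the anchors into a constraint block that prefixes every session, then flag contradictions by comparison rather than gut feel. The string check is deliberately naive; a real check would parse numbers, dates, and units:

```python
def session_preamble(store: AnchorStore) -> str:
    """Render locked values as the constraint layer fed at session start."""
    lines = ["Locked values. Do not change, round, or re-derive these:"]
    lines += [f"- {a.key} = {a.value} (validated by {a.validated_by})" for a in store.all()]
    return "\n".join(lines)

def check_output(text: str, store: AnchorStore) -> list[str]:
    """Flag every anchor the output mentions without its locked value."""
    return [
        f"contradiction: output mentions '{a.key}' without locked value '{a.value}'"
        for a in store.all()
        if a.key in text and str(a.value) not in text
    ]
```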
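Step 4, sketched: the handler treats one contradiction as a trigger to re-verify every anchor (the "first point outside control limits," in Shewhart's terms), since drift is rarely confined to the single value that surfaced it. `check_output` is the helper from the previous sketch:

```python
def on_contradiction(finding: str, output: str, store: AnchorStore) -> None:
    """A contradiction is a signal about the context, not just an error to fix."""
    print(f"INVESTIGATE: {finding}")
    others = [f for f in check_output(output, store) if f != finding]
    if others:
        print(f"broader drift: {len(others)} additional anchors contradicted")
    # the next step is manual: find what in the session context led the AI off the anchors
```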
References
- Tversky, A. & Kahneman, D. (1974). "Judgment under Uncertainty: Heuristics and Biases." Science, 185(4157), 1124–1131.
- Shewhart, W.A. (1931). Economic Control of Quality of Manufactured Product. D. Van Nostrand Company.
- Keating, M.G. (2026). "Vision." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Foundation." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Pendulum." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Scaffold." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Target." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Environmental Control." Stealth Labz CEM Papers.
- Keating, M.G. (2026). "Drift Tax." Stealth Labz CEM Papers.