Contents
- The Setup
- What the Data Shows
- How It Works
- What This Means for Engineering Leaders
Published: February 17, 2026
The Setup
There is a persistent intuition in software development that large systems require large planning horizons. A 200,000-line platform must be architected upfront. A multi-year product roadmap needs a multi-month design phase. The bigger the target, the bigger the container you build to reach it. This intuition is wrong, and the data has been saying so for decades.
The conventional approach to building large systems starts with comprehensive requirements gathering, flows into architectural design, then moves through phased implementation. Each phase is large. Each handoff is expensive. Each delay in one phase cascades into every downstream phase. The result is a system that ships late, ships over budget, or ships with features that no longer match market reality because the market moved while the plan was being executed.
The failure mode is not bad planning. It is the wrong unit of work. When the cycle is large --- a two-week sprint, a quarterly release, a six-month phase --- the feedback loop is slow, the blast radius of errors is wide, and the cost of course correction is high. The system grows not through compounding progress but through accumulated bets, each one placed months before the outcome is known.
What the Data Shows
The DORA (DevOps Research and Assessment) program, now housed within Google Cloud, has spent years measuring elite software delivery performance. Their findings on batch size are unambiguous: smaller batch sizes correlate with higher deployment frequency, lower change failure rates, and faster recovery times. Teams that deploy in small increments outperform teams that deploy in large releases across every metric DORA tracks. The relationship is not linear --- it is compounding. Small batches produce fast feedback, fast feedback enables faster correction, faster correction produces higher quality, and higher quality enables even smaller batches with confidence.
The Toyota Production System (TPS) established these principles in manufacturing decades before software adopted them. Taiichi Ohno's single-piece flow concept --- producing one unit at a time rather than batching --- reduced inventory, exposed defects immediately, and dramatically increased throughput. The principle transfers directly to software: a single completed feature, tested and deployed, beats ten features "in progress" across a sprint. The work-in-progress is not value. The completed, deployed, functioning feature is value. TPS demonstrated that reducing batch size does not reduce throughput --- it increases it, because the waste hidden inside large batches (waiting, rework, overproduction) disappears.
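Ohno's point about hidden waste can be made concrete with a toy queuing model. This is a minimal sketch, not anything from TPS or CEM: it assumes units are processed sequentially, one unit of work per time step, and that a unit only "ships" when its whole batch does (`lead_times` is a hypothetical helper name).

```python
def lead_times(n_units: int, batch_size: int, unit_time: float = 1.0):
    """Completion time of each unit when work ships in batches.

    Toy model: units are processed one after another, and a unit is
    only 'done' when the batch containing it is released.
    """
    times = []
    for i in range(n_units):
        # The batch holding unit i releases when all its units are processed.
        batch_end = ((i // batch_size) + 1) * batch_size * unit_time
        times.append(batch_end)
    return times

big = lead_times(10, batch_size=10)   # one big release
small = lead_times(10, batch_size=1)  # single-piece flow

# Same total work, very different feedback latency:
# big   -> first feedback at t=10, average lead time 10.0
# small -> first feedback at t=1,  average lead time 5.5
```

The total processing time is identical in both runs; only the release policy changes. The large batch delays every unit's feedback to the end, which is exactly the waste single-piece flow removes.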
The Accelerate research by Nicole Forsgren, Jez Humble, and Gene Kim formalized these observations for software specifically. Their data shows that high-performing teams work in small batches, integrate continuously, and deploy frequently. The counterintuitive finding: working in smaller units does not mean shipping less. It means shipping more, faster, with fewer defects. The constraint is not the amount of work per cycle --- it is the speed of the cycle itself. Faster cycles produce more total output than slower cycles, even when each individual cycle contains less work.
Within the CEM (Compounding Execution Method) framework, the mechanism that operationalizes small-cycle architecture is called Nested Cycles. Developed and validated by Michael George Keating across the production of 596,903 lines of code in 10 systems (October 2025 -- February 2026), Nested Cycles implement fractal time architecture: self-similar cycles that nest from 15-minute micro-cycles up through session-cycles (2--4 hours), daily-cycles, task-cycles (1--3 days), component-cycles (1--7 days), and project-cycles (1--4 weeks). Each cycle follows an identical rhythm --- build, clean, improve, document, complete, repeat --- regardless of its duration.
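The fractal structure can be sketched as a small recursive data type: every cycle, at any scale, closes with the same rhythm, and a parent cycle is nothing more than its children plus that rhythm. This is an illustrative model only, not CEM's actual tooling; the `Cycle` class and `RHYTHM` list are names assumed for the sketch.

```python
from dataclasses import dataclass, field
from typing import List

# The identical rhythm the text names, applied at every scale.
RHYTHM = ["build", "clean", "improve", "document", "complete"]

@dataclass
class Cycle:
    scale: str                                  # "micro", "session", "daily", ...
    children: List["Cycle"] = field(default_factory=list)

    def run(self, log: list) -> None:
        # Children complete first; the parent emerges from their completion,
        # then closes with the same rhythm used at every scale.
        for child in self.children:
            child.run(log)
        for phase in RHYTHM:
            log.append(f"{self.scale}:{phase}")

session = Cycle("session", [Cycle("micro"), Cycle("micro")])
log: list = []
session.run(log)
# Every scale ends on "complete"; the pattern is identical top to bottom.
```

The recursion is the point: there is one `run` method, not one per scale, which is what "fractal self-similarity" buys the operator.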
The CEM portfolio data demonstrates the pattern concretely. 2,561 commits distributed across nested temporal structures. MVP timelines compressed from weeks to days: PRJ-05 reached MVP in 4 days. PRJ-04 reached MVP in 5 days. PRJ-01 --- a 194,954-line enterprise platform with 135 database tables and 20 external integrations --- was built through thousands of micro-cycles that aggregated into session-cycles, task-cycles, and project-cycles over four months. The system was never "planned" as a 194,954-line monolith. It emerged from the compounding completion of small cycles, each one producing a deployable increment.
Output did not degrade over time --- it accelerated. Average daily commits moved from 6.8 in October to 4.8 in November (a learning-phase dip) to 10.0 in December to 31.1 in January. The fractal cycle structure enabled sustained acceleration rather than the burnout pattern typically associated with high-output periods. Seventeen days showed commit spans of 20+ hours --- not continuous work, but nested session-cycles distributed across extended periods, with breaks between sessions.
How It Works
Nested Cycles work because of three properties that compound at every scale. First, emergent duration: the cycle matches the work, not the other way around. A bug fix takes 15 minutes, so it gets a 15-minute micro-cycle. An integration takes two weeks, so it gets a two-week project-cycle. There is no forcing function that compresses three weeks of work into a two-week sprint or pads three days of work out to fill one. Each unit of work completes in its natural timeframe.
Second, completion discipline: every cycle ends with something done. Not "in progress." Not "80% complete." Done. If the work cannot be completed within the cycle, the operator makes an explicit binary decision (via the Pendulum mechanism): continue into another cycle, or stash the work for later. No cycle ends with abandoned work floating in limbo. This eliminates the accumulation of half-done features that characterizes large-batch development and creates the illusion of progress without the reality of deliverables.
Third, fractal self-similarity: the same pattern at every scale means the operator does not need different frameworks for different granularities. The rhythm of a 15-minute micro-cycle is identical to the rhythm of a four-week project-cycle --- build, clean, improve, document, complete, repeat. This reduces cognitive overhead. The operator learns one pattern and applies it at every level. Context switching between scale levels is seamless because the structure is identical at every level.
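The completion discipline (the second property) amounts to a closed set of end states: done, continued, or stashed, with no fourth option. The `Pendulum` enum and `close_cycle` helper below are hypothetical names for a sketch of that rule, assuming only the binary decision the text describes.

```python
from enum import Enum

class Pendulum(Enum):
    """Explicit end-of-cycle decision for unfinished work (no third option)."""
    CONTINUE = "roll the work into another cycle"
    STASH = "shelve the work explicitly for later"

def close_cycle(done: bool, decide) -> str:
    """Every cycle ends in a definite state.

    `decide` is a callable returning a Pendulum member; nothing is ever
    left 'in progress' without an explicit choice.
    """
    if done:
        return "complete"
    choice = decide()
    assert isinstance(choice, Pendulum)  # the decision is binary by construction
    return choice.name.lower()

close_cycle(True, lambda: Pendulum.STASH)      # -> "complete"
close_cycle(False, lambda: Pendulum.CONTINUE)  # -> "continue"
```

Making the decision a required function argument, rather than a default, is the sketch's way of expressing that limbo is structurally impossible: the cycle cannot close without an answer.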
The nesting hierarchy creates natural aggregation. Micro-cycles feed session-cycles. Session-cycles feed daily-cycles. Daily-cycles feed task-cycles. Each parent cycle does not need to be planned in advance --- it emerges from the completion of its children. The four-day MVP timeline for PRJ-05 was not planned as a four-day project. It was the natural aggregation of daily-cycles, each composed of session-cycles, each composed of micro-cycles. The timeline emerged from execution, not from a Gantt chart.
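The emergent-aggregation idea can be put numerically: a parent cycle's duration is computed from its completed children, never fixed in advance. The dict shape and `emergent_duration` helper here are assumptions for illustration, not CEM data structures.

```python
def emergent_duration(cycle: dict) -> float:
    """A parent's span is the sum of its completed children.

    Only leaves (micro-cycles) carry a duration of their own; every
    higher scale is an aggregate, not a plan.
    """
    if not cycle["children"]:
        return cycle["minutes"]
    return sum(emergent_duration(c) for c in cycle["children"])

def micro(minutes: int = 15) -> dict:
    return {"minutes": minutes, "children": []}

session = {"children": [micro() for _ in range(8)]}        # ~2 h emerges
daily = {"children": [session, {"children": [micro(30)]}]}

emergent_duration(session)  # 120 minutes, from eight 15-minute micro-cycles
emergent_duration(daily)    # 150 minutes, aggregated bottom-up
```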
What This Means for Engineering Leaders
If your team is building large systems using large cycles --- quarterly planning, two-week sprints with large story points, phased rollouts measured in months --- the data suggests you are paying a compounding tax on batch size. Every week of delay between a decision and its deployment is a week during which that decision cannot generate feedback, cannot be corrected, and cannot compound into the next decision.
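The compounding tax can be illustrated with a deliberately simple model: assume each completed cycle's feedback improves the next cycle by a constant factor. The 2% per-cycle gain is an arbitrary assumption, and a constant gain overstates the effect at the extremes; the direction of the result, not the numbers, is the point.

```python
def compounded_output(weeks: float, cycle_weeks: float, gain: float = 0.02) -> float:
    """Relative output after `weeks`, under the toy assumption that each
    completed cycle's feedback improves the next cycle by `gain`."""
    cycles = weeks / cycle_weeks
    return (1 + gain) ** cycles

quarter = 12
compounded_output(quarter, cycle_weeks=12)   # one big release:    1.02x
compounded_output(quarter, cycle_weeks=2)    # six sprints:       ~1.13x
compounded_output(quarter, cycle_weeks=0.1)  # many micro-cycles: ~10.8x
```

The same calendar time and the same per-cycle gain produce wildly different totals because the exponent, not the base, is what the cycle length controls.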
The operational shift is not to do less work. It is to complete work in smaller units, faster, with tighter feedback loops. The CEM portfolio data demonstrates the endpoint: 596,903 lines of production code across 10 systems in 4 months, built through thousands of nested cycles, each one complete on its own terms. The large system was not planned large. It was built small, cycle by cycle, and the scale emerged from the compounding of completed increments. The cycle is the unit of value. Make the cycles small. Make them complete. Let the system grow from what you finish, not from what you plan.
Related: How to Know When to Kill a Software Project | How to Document a Software Development Methodology in 12 Days
References
- Google Cloud DORA Team (2024). "State of DevOps Report." Batch size vs. deployment frequency and change failure rate data.
- Ohno, T. (1988). Toyota Production System: Beyond Large-Scale Production. Productivity Press. Single-piece flow principles.
- Forsgren, N., Humble, J. & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps. IT Revolution Press. Small batch sizes and throughput correlation.