Contents
- The Setup
- What the Data Shows
- How It Works
- What This Means for Business Operators
The Setup
When you build software fast, things break. That is the accepted wisdom across the entire technology industry. Push a team to ship faster and defect rates climb. Push harder and you end up spending more time fixing bugs than building features. The speed-quality tradeoff is treated as an iron law.
The data supports this for most development environments. Rollbar's 2023 developer survey found that 26% of developers spend more than 50% of their time fixing bugs, and another 38% spend more than 25% of their time on bug fixes. A Stripe-commissioned study found that the average developer spends 17.3 hours per week on maintenance and technical debt. The industry standard for an "acceptable" rework rate sits around 20%, and most teams run well above it.
When you add the complexity of building multiple products simultaneously -- different verticals, different requirements, different deployment targets -- the quality problem compounds. Each new product introduces new failure modes. Coordination overhead increases. The conventional response is to slow down, add more testing, and accept longer timelines. But that defeats the purpose of multi-product expansion.
What the Data Shows
McConnell's Code Complete establishes industry baseline rework rates at 20-50% across software projects of varying complexity. The average developer creates 70 bugs per 1,000 lines of code, with 15 bugs per 1,000 lines reaching customers (Coralogix, 2023). These benchmarks represent the state of the industry under standard development practices.
Inside the Stealth Labz portfolio, four insurance-vertical products -- PRJ-08, PRJ-09, PRJ-10, and PRJ-11 -- were built on a shared software architecture between October and November 2025. The rework rates, recomputed in the sketch after the list:
- PRJ-10: 3.7% rework (6 of 163 commits)
- PRJ-08: 3.8% rework (6 of 156 commits)
- PRJ-09: 3.9% rework (6 of 154 commits)
- PRJ-11: 11.3% rework (8 of 71 commits)
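For readers who want to check the arithmetic, the short sketch below recomputes each product's rework rate, plus the net-new delivery ratio referenced later in this section, from the commit counts published above. The commit figures are the only inputs; nothing else is assumed.

```python
# Recompute rework and net-new delivery rates from the published commit counts.
products = {
    "PRJ-10": {"total": 163, "rework": 6},
    "PRJ-08": {"total": 156, "rework": 6},
    "PRJ-09": {"total": 154, "rework": 6},
    "PRJ-11": {"total": 71,  "rework": 8},
}

for name, p in products.items():
    rework_rate = p["rework"] / p["total"]
    net_new = 1 - rework_rate  # share of commits that were new feature work
    print(f"{name}: rework {rework_rate:.1%}, net-new {net_new:.1%}")

# Output:
# PRJ-10: rework 3.7%, net-new 96.3%
# PRJ-08: rework 3.8%, net-new 96.2%
# PRJ-09: rework 3.9%, net-new 96.1%
# PRJ-11: rework 11.3%, net-new 88.7%
```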
The first three products in that list hit nearly identical quality numbers -- a 3.7-3.9% cluster that is roughly one-fifth of the industry's "acceptable" floor of 20%. Zero reverts across all three. The fourth product (PRJ-11) had a higher rate at 11.3%, but that product spanned five sub-verticals with significantly more complexity -- and it still came in at just over half the industry's 20% floor.
These were not slow, careful builds. PRJ-08 shipped in 24 active days, PRJ-09 in 23, PRJ-10 in 25, and PRJ-11 in 11. Even the most compressed build of the four, PRJ-11 at 11 days, came in well under industry baselines. Speed and quality did not trade against each other. They moved together.
Across the broader portfolio, the trend holds. The overall portfolio maintained a 12.1% product defect rate while output velocity increased 4.6x over the build period. As of January 2026, the pattern is clear: the later the product in the build sequence, the faster it shipped and the cleaner it was.
For context, PRJ-10's net-new delivery rate was 96.3%: 157 of its 163 commits were new feature work rather than fixes or rework. The industry target for this ratio is 80% (the "80/20 rule"). These products exceeded that target by a wide margin while being built by an operator with zero prior software engineering experience.
How It Works
The 3.7-3.9% cluster is not the result of extraordinary care or slower development. It is the result of the shared architecture propagating its quality into every product built on top of it.
When the infrastructure layer -- authentication, database patterns, admin interfaces, deployment pipelines -- has been tested across multiple production deployments, it carries its quality history with it. A new product built on this foundation does not need to test authentication because authentication has already been proven across prior products. It does not need to debug deployment because the deployment pipeline already works.
This shifts the quality effort from "test and fix everything" to "test and fix only what is new." When 80% of a product is inherited from a proven foundation, only 20% needs fresh testing. The result is that rework concentrates in the product-specific layer and virtually disappears in the infrastructure layer.
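A back-of-the-envelope model shows why the observed numbers are plausible. Assume, per the paragraph above, that 80% of a product is inherited and reworks at roughly zero, while the new 20% reworks at the industry's 20% floor; both per-layer rates are illustrative assumptions, not measured figures. The blended rate lands at 4%, right beside the observed 3.7-3.9% cluster.

```python
# Illustrative blended-rework model (assumed inputs, not measured data).
inherited_share = 0.80   # assumed: fraction of the product reused from the scaffold
new_share = 1 - inherited_share

rework_inherited = 0.00  # assumed: inherited layer is already production-proven
rework_new = 0.20        # assumed: new code reworks at the industry's 20% floor

blended = inherited_share * rework_inherited + new_share * rework_new
print(f"expected blended rework: {blended:.1%}")  # -> 4.0%
```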
The 3.7-3.9% rework cluster across PRJ-08, PRJ-09, and PRJ-10 is not coincidence. These three products share the closest foundation -- the same admin dashboard with 40+ versioned iterations, the same API routing patterns, the same deployment pipeline configuration. The infrastructure underneath them is identical. The quality is identical because the quality is inherited.
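To make the inheritance concrete, here is a minimal structural sketch. Every name in it is invented for illustration -- the case study does not publish its codebase -- but the shape is the point: the shared scaffold is defined once, carries its production history with it, and each new product adds only the vertical-specific layer.

```python
from dataclasses import dataclass, field

# Shared infrastructure layer (hypothetical names): built and proven once,
# then reused unchanged by every product in the portfolio.
@dataclass
class SharedScaffold:
    auth_module: str = "auth-v3"          # stand-ins for proven components;
    deploy_pipeline: str = "pipeline-v7"  # each carries its production history,
    admin_dashboard: str = "admin-v42"    # e.g. the 40+ iterated dashboard

# Product-specific layer: the only part that needs fresh testing.
@dataclass
class VerticalProduct:
    name: str
    quote_rules: dict                     # new logic unique to this vertical
    scaffold: SharedScaffold = field(default_factory=SharedScaffold)

# A new product reuses the scaffold; only quote_rules is untested code.
product = VerticalProduct("PRJ-XX", quote_rules={"line": "specialty"})
print(product.scaffold.admin_dashboard)  # inherited, already production-proven
```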
What This Means for Business Operators
Rework costs money. Every hour spent fixing a bug is an hour not spent building features. At industry-average rework rates of 20-50%, between one-fifth and one-half of your development budget goes to fixing mistakes rather than creating value. At 3.7%, that waste drops to less than one-twenty-fifth of the build effort.
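To put that waste in budget terms, the sketch below prices rework at the industry floor, the industry ceiling, and the cluster's best rate. The $500,000 annual budget is a hypothetical figure chosen for illustration; only the rates come from the section above, and the waste scales linearly with whatever budget you substitute.

```python
# Rework cost at different rates, against a hypothetical development budget.
budget = 500_000  # assumed annual spend; results scale linearly

for label, rate in [("industry floor", 0.20),
                    ("industry ceiling", 0.50),
                    ("3.7% cluster", 0.037)]:
    print(f"{label:>16}: ${budget * rate:>9,.0f} on rework")

# Output:
#   industry floor: $  100,000 on rework
# industry ceiling: $  250,000 on rework
#     3.7% cluster: $   18,500 on rework
```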
For operators considering multi-product expansion, the quality question is usually the objection: "If we build fast across multiple products, quality will suffer." The data from this portfolio says the opposite. Quality improved as the shared architecture matured, because each product inherited a cleaner foundation than the product before it. Rework across the three closest-foundation products stayed within a 0.2-point band, at roughly one-fifth of the industry floor. Speed and quality compounded together instead of trading against each other. The investment in getting the shared architecture right is not just a speed play -- it is a quality play.
Related: How to Launch in 4 Verticals with 79% Lower Costs Using Shared Software Architecture | What Transfers Between Products When You Share Software Infrastructure (and What Doesn't)
References
- McConnell, S. Code Complete. Industry baseline rework rates of 20-50% across software projects.
- Rollbar (2023). "Developer Survey." Time spent on bug fixes as a percentage of total development effort.
- Stripe. "Developer Coefficient Study." Average developer hours per week on maintenance and technical debt.
- Coralogix (2023). "Bug Rate Analysis." Average bugs per 1,000 lines of code reaching customers.
- Keating, M.G. (2026). "Case Study: One Scaffold, Four Products." Stealth Labz.
- Keating, M.G. (2026). "Case Study: Quality at Speed." Stealth Labz.
- Keating, M.G. (2026). "The Compounding Execution Method: Complete Technical Documentation." Stealth Labz.