
38 Products Tested, 6 Scaled: How to Build a Test-and-Learn Product Machine

Multi-Vertical Scaling

Key Takeaways
  • The standard startup narrative goes like this: come up with one product idea, spend months building it, launch it, and hope it works.
  • According to a 2024 analysis by First Round Capital, the median number of product pivots before achieving product-market fit is 2-3, with some companies iterating through 5-7 versions.
  • The test-and-learn machine runs on two conditions: near-zero infrastructure cost per test and fast kill decisions.
  • Product-market fit is not a single discovery; it is a portfolio exercise.

The Setup

The standard startup narrative goes like this: come up with one product idea, spend months building it, launch it, and hope it works. If it does not, you have burned through your runway and your options are limited. If it does, you scale it. Either way, you are making one bet with everything on the line.

The problem with single-bet strategies is that product-market fit is hard to predict in advance. CB Insights analyzed 101 startup failures and found that 42% cited "no market need" as the primary reason for failure -- meaning they built something nobody wanted. When your entire business depends on one product finding its audience, the odds are not in your favor.

The alternative is a portfolio approach: test many products, measure results fast, scale winners, kill losers. This sounds obvious in theory. In practice, it only works if the cost of testing each new product is low enough that you can afford to test dozens without going broke. If each test costs $30,000 and takes 3 months, you cannot test 38 products. You can barely test 3.
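To make that arithmetic concrete, here is a quick sketch. The budget figure is hypothetical; only the $30,000-per-test scenario comes from the paragraph above.

```python
# How many product tests can a fixed budget afford at each per-test cost?
# The budget is hypothetical; only the $30,000 scenario comes from the text.

def affordable_tests(budget: float, cost_per_test: float) -> int:
    """Number of full tests a budget covers before the money runs out."""
    return int(budget // cost_per_test)

budget = 150_000  # hypothetical runway earmarked for product testing

for cost in (30_000, 5_000, 500):
    print(f"At ${cost:,} per test: {affordable_tests(budget, cost)} tests")

# At $30,000 per test: 5 tests
# At $5,000 per test: 30 tests
# At $500 per test: 300 tests
```

The portfolio approach only becomes viable at the bottom of that table, which is why driving per-test cost down matters more than picking the right first bet.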

What the Data Shows

According to a 2024 analysis by First Round Capital, the median number of product pivots before achieving product-market fit is 2-3, with some companies iterating through 5-7 versions. Harvard Business School research on venture-backed startups found that companies that test multiple product concepts in parallel achieve product-market fit 2.5x faster than those that test sequentially.

Between October 2023 and January 2026, operator Michael George Keating launched 38 distinct product SKUs through Konnektive CRM, generating $1,075,946 in gross revenue. The results segmented into three clear tiers:

Tier 1 -- Scaled products (>$25,000 net revenue): 6 products

  • PRD-01: $509,821 net (peak month February 2024 at $173,000)
  • PRD-02: $128,794 net
  • PRD-03: $100,909 net
  • PRD-04: $71,792 net
  • PRD-06: $48,280 net
  • PRD-05: $28,574 net

Tier 1 total: $888,170 -- representing 94.5% of total net revenue from 15.8% of all products tested.

Tier 2 -- Moderate traction ($1,000-$25,000 net): 9 products

Including PRD-07 ($14,419), PRD-08 ($11,487), PRD-09 ($7,317), and six others. Tier 2 total: $47,456 (5.0% of revenue). Some of these launched in mid-2025 and may still be climbing.

Tier 3 -- Micro-tests (<$1,000 net): 23 products

Each generated under $1,000. Average micro-test revenue: $56. Tier 3 total: $4,320 (0.5% of revenue).

The distribution follows a classic power law: 6 of 38 products drove 94.5% of net revenue. The 23 micro-tests are not failures -- they are the testing layer. Each one cost virtually nothing to deploy because it ran on the same shared infrastructure (same CRM, same payment processing, same fulfillment pipeline, same affiliate tracking). The signal -- "this is not working" -- was read quickly, and resources were redirected to winners.
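As a sanity check, the concentration figures can be reproduced directly from the tier totals reported above -- no new data, just the arithmetic:

```python
# Reproduce the revenue-concentration figures from the tier totals above.
tier_totals = {
    "Tier 1 (6 products)":  888_170,
    "Tier 2 (9 products)":   47_456,
    "Tier 3 (23 products)":   4_320,
}

net_total = sum(tier_totals.values())  # $939,946 net across all 38 products

for tier, revenue in tier_totals.items():
    print(f"{tier}: ${revenue:,} ({revenue / net_total:.1%} of net revenue)")

print(f"Share of products in Tier 1: {6 / 38:.1%}")  # 15.8%
```

Running this yields 94.5%, 5.0%, and 0.5% for the three tiers -- the same split the article reports.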

Products launched in waves across the 28-month period. Wave 1 (October 2023) proved the infrastructure worked. Wave 2 (February 2024) hit peak revenue with multiple products running simultaneously. Waves 3-6 tested new verticals, killed what did not gain traction, and found late-stage breakouts like PRD-03, which launched in August 2025 and hit $100,909 net in five months.

How It Works

The test-and-learn machine runs on two conditions. First, the infrastructure cost of testing each new product must be near zero. All 38 products ran through the same Konnektive CRM, the same payment processing pipeline, the same affiliate tracking, the same fulfillment systems, and the same reporting and attribution tools. The marginal cost of adding a new product to this infrastructure was configuration time, not engineering time.
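Konnektive's actual configuration schema is not reproduced here, but a hypothetical sketch illustrates what "configuration time, not engineering time" means in practice: a new product is a data entry pointed at shared services, not new code. All field names and service labels below are invented for illustration.

```python
# Hypothetical illustration: launching a product as configuration, not code.
# Field names and service labels are invented; this is not Konnektive's schema.
from dataclasses import dataclass

@dataclass
class ProductConfig:
    sku: str
    price: float
    fulfillment_route: str   # points at the existing shared fulfillment pipeline
    affiliate_campaign: str  # points at the existing shared affiliate tracking

def launch(product: ProductConfig) -> None:
    """Register a new SKU against the shared stack -- configuration, not engineering."""
    print(f"Launched {product.sku} at ${product.price:.2f} "
          f"via {product.fulfillment_route} / {product.affiliate_campaign}")

launch(ProductConfig(sku="PRD-39", price=49.00,
                     fulfillment_route="default-3pl", affiliate_campaign="aff-main"))
```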

Second, the kill decision must be fast. The operator's micro-tests averaged $56 in total revenue each. That means the "this is not working" signal was read within 30-60 days and resources were reallocated. There is no $10,000-$15,000 product in the portfolio that limped along for months. Products either showed traction fast or were cut.
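The operator's exact kill rule is not documented. A minimal sketch of the decision implied by the text might look like this, with the 60-day window taken from above and the $1,000 threshold borrowed from the Tier 3 boundary as an assumption:

```python
# Minimal kill-rule sketch. The 60-day window comes from the text; the $1,000
# threshold is borrowed from the Tier 3 boundary and is an assumption.

def kill_or_keep(revenue_by_day: list[float], window_days: int = 60,
                 threshold: float = 1_000.0) -> str:
    """Decide a test product's fate from its early revenue trajectory."""
    early_revenue = sum(revenue_by_day[:window_days])
    return "keep: scale further" if early_revenue >= threshold else "kill: redeploy infra"

print(kill_or_keep([2.0] * 28))    # ~$56 over a month  -> kill
print(kill_or_keep([150.0] * 30))  # $4,500 in 30 days -> keep
```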

The lifecycle patterns across the portfolio fall into four distinct types (a classification sketch follows the list):

  • Flash scale + cliff: PRD-01, PRD-02, PRD-04 -- high peak revenue, affiliate-dependent, rapid decline after peak
  • Steady grower: PRD-06, PRD-05 -- consistent monthly revenue without a single dramatic peak
  • Late-stage breakout: PRD-03 -- launched months after the first wave, broke out to six-figure net revenue
  • Micro-test to kill: Products generating $4-$56 total -- signal received, product killed, infrastructure redeployed
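These four types can be told apart from a product's monthly revenue curve alone. The heuristic below is an illustrative sketch; the thresholds are invented, not the operator's actual rules, and the example revenue series (apart from PRD-01's $173,000 peak month) are made up.

```python
# Illustrative classifier for the four lifecycle types above.
# Thresholds are invented for this sketch; they are not the operator's rules.

def classify(monthly_net: list[float]) -> str:
    total, peak = sum(monthly_net), max(monthly_net)
    if total < 1_000:
        return "micro-test to kill"
    if monthly_net[-1] == peak:  # still climbing at the end of the series
        return "late-stage breakout" if monthly_net[0] == 0 else "steady grower"
    if monthly_net[-1] < 0.25 * peak:  # collapsed well off a high peak
        return "flash scale + cliff"
    return "steady grower"

print(classify([10_000, 173_000, 40_000, 8_000]))    # flash scale + cliff
print(classify([4_000, 5_000, 4_500, 5_200, 4_800])) # steady grower
print(classify([0, 0, 0, 5_000, 30_000, 65_000]))    # late-stage breakout
print(classify([12, 30, 14]))                        # micro-test to kill
```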

What This Means for Business Operators

Product-market fit is not a single discovery. It is a portfolio exercise. The operator who tests 38 products and scales 6 has a fundamentally different risk profile than the operator who builds one product and hopes. Six winners generating $888,170 in net revenue is not lucky. It is the expected outcome of running enough tests on infrastructure that makes each test nearly free.

The 23 micro-tests averaging $56 each are the cost of finding those 6 winners. And because every test ran on shared infrastructure, the total cost of all 23 failures combined was negligible. For operators thinking about product strategy, the question is not "which product should I build?" It is "how many products can I test before I need a winner?" The answer depends entirely on what each test costs -- and shared infrastructure is what drives that cost toward zero.
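One way to frame that question is as a simple binomial estimate: if roughly 6 of 38 tests scale (a 15.8% hit rate, taken from the portfolio above) and tests are treated as independent -- an assumption -- the chance of landing at least one winner grows quickly with the number of tests.

```python
# Chance of at least one winner after n independent tests, assuming the
# portfolio's observed 6/38 hit rate holds. Independence is an assumption.
hit_rate = 6 / 38  # ~15.8%, from the portfolio above

for n in (3, 10, 20, 38):
    p_at_least_one = 1 - (1 - hit_rate) ** n
    print(f"{n} tests -> {p_at_least_one:.0%} chance of at least one winner")

# 3 tests -> 40%, 10 -> 82%, 20 -> 97%, 38 -> ~100%
```

At three tests -- the single-bet budget from the opening section -- the odds are a coin flip at best; at 38, a winner is close to a statistical certainty.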


Related: How to Enter a New Business Vertical in Days Instead of Months | The Cold-Start Problem in Multi-Product Businesses (and How Shared Infrastructure Solves It)

References

  1. CB Insights (2023). "Startup Failure Analysis." Primary causes of startup failure including lack of market need.
  2. First Round Capital (2024). "Product-Market Fit Study." Median product pivots and parallel testing impact on time to product-market fit.
  3. Harvard Business School. "Venture Research." Multi-concept testing velocity in venture-backed startups.
  4. Keating, M.G. (2026). "Case Study: The Product Launch Engine." Stealth Labz.
  5. Keating, M.G. (2026). "The Compounding Execution Method: Complete Technical Documentation." Stealth Labz.