Intro
A funded founder reaches out. The app just launched. The investors want growth metrics. The temptation is to dump the seed budget into Meta and Google immediately and start posting weekly install numbers to the board chat.
That's the most expensive way to spend a seed round. Optimizing for scale on an app with no audience data, no install history, and no validated buyer profile means optimizing on noise. Every dollar spent before validation is a dollar that didn't buy learning.
The cleaner approach is phased. Validate first. Monetize second. Scale third. The phases overlap, but they don't collapse. Below is the framework, with the metric checkpoints that determine whether you're ready to move from one phase to the next.
Why the temptation to skip validation is so strong
Three pressures push funded founders into premature scale:
1. Investor metrics that reward inputs. A board chart that shows "installs growing 30% week over week" looks better than a chart that says "blended CPI stable at $1.20 with 18% trial start rate, validation continuing." The first is easy to read. The second is the one that actually tells you whether the business will exist in 18 months.
2. The illusion of velocity. Spending budget feels like progress. A pre-revenue app burning seed money on Meta feels active. But activity isn't validation. Most cold-start campaigns burn through their first $10-20K acquiring users who don't trial, don't convert, and don't reveal anything about whether the product fits the market.
3. Platform optimization that rewards volume early. Meta and Google reward accounts that produce conversion volume. The temptation is to optimize toward install volume so the algorithm "learns faster." But what the algorithm learns on a cold app is what kind of cheap installs to find. Cheap installs that don't trial. Cheap installs that don't pay.
None of these pressures actually accelerate the business. They just accelerate the spend. The discipline that protects funded pre-revenue apps is to refuse to confuse the two.
Phase 1 — Validate the install economy
- Goal: Determine whether the target audience exists, at what cost, and on which platform.
- Primary metric: Blended CPI, segmented by platform (Android vs. iOS).
- Exit criteria: CPI stable across two consecutive weeks, with at least 1,000 installs total and a clear platform split.
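That exit check is mechanical enough to script. A minimal sketch, assuming weekly CPI and install counts are already exported per platform; the 10% week-over-week tolerance is an assumption, since the framework doesn't fix a number:

```python
# Phase 1 exit check: CPI stable across two consecutive weeks and at
# least 1,000 installs total. Platform-segmented inputs embody the
# "clear platform split" requirement. The 10% tolerance is an assumption.
def phase1_exit_ready(weekly: list[dict], tolerance: float = 0.10) -> bool:
    """weekly: ordered rows of {"android_cpi", "ios_cpi", "installs"}."""
    if len(weekly) < 2:
        return False
    prev, last = weekly[-2], weekly[-1]

    def stable(curr: float, base: float) -> bool:
        return abs(curr - base) / base <= tolerance

    return (
        stable(last["android_cpi"], prev["android_cpi"])
        and stable(last["ios_cpi"], prev["ios_cpi"])
        and sum(w["installs"] for w in weekly) >= 1_000
    )

weeks = [
    {"android_cpi": 0.45, "ios_cpi": 1.95, "installs": 620},
    {"android_cpi": 0.42, "ios_cpi": 1.85, "installs": 710},
]
print(phase1_exit_ready(weeks))  # True: drift under 10%, 1,330 installs total
```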
In phase 1, the campaigns are doing one job: finding the audience. Not monetizing it, not retargeting it, not nurturing it. Just answering the question: does this audience exist, and how expensive is it to reach?
The campaign structure:
- App install campaigns on Meta (primary), with secondary tests on TikTok or Apple Search Ads depending on category.
- Separate campaign structures for Android and iOS. This is non-negotiable.
- 2-3 creative variants per platform, rotating weekly.
- Audience targeting based on the best hypothesis available: lookalikes of best-fit customers if you have any, or behavior-based proxies if you don't.
- Optimization objective: install volume (not in-app events yet; the data isn't there).
Why Android and iOS need separate structures: iOS CPI is structurally 3-5x higher than Android due to Apple's privacy framework, narrower targeting, more competitive auction dynamics, and higher historical LTV. Running them under the same campaign with the same bid logic either overpays for Android (wasting budget on cheap installs that don't reflect iOS demand) or underdelivers on iOS (missing the audience that matters most for monetization). The split is basic operational hygiene for app launches.
Example CPI breakdown from Hair Try-On phase 1:
- Android CPI: $0.42 (below category average for B2C apps)
- iOS CPI: $1.85 (near category average)
- Blended CPI: $0.84
- Volume distribution: Android ~70% of installs, iOS ~30% (early-phase weighting)
The blended $0.84 is the headline. The platform-segmented numbers are the operating data. If we'd reported only the blended figure, the assumption would have been that the app was performing exceptionally on both platforms. The reality was more nuanced: Android was overperforming category norms, iOS was performing at category baseline. Each platform needed a different optimization strategy going into phase 2.
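Blended CPI is just an install-weighted average of the platform CPIs, which is exactly why it hides the split. A quick reconstruction of the numbers above; the install counts are illustrative, since the case reports only the ~70/30 split, so the result lands within a cent of the published $0.84:

```python
# Blended CPI = total spend / total installs, i.e. an install-weighted
# average of per-platform CPIs. Install counts are illustrative; only
# the ~70/30 volume split comes from the case data.
android_cpi, ios_cpi = 0.42, 1.85
android_installs, ios_installs = 7_000, 3_000  # ~70% / ~30%

total_spend = android_cpi * android_installs + ios_cpi * ios_installs
blended = total_spend / (android_installs + ios_installs)
print(f"blended CPI ${blended:.2f}")  # $0.85 -- masks a ~4.4x platform gap
```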
Read the full Hair Try-On case →
Phase 2 — Layer monetization on top
- Goal: Determine whether the audience phase 1 found will convert into paying users.
- Primary metric: Trial-to-paid conversion rate, by acquisition cohort.
- Exit criteria: Trial-to-paid conversion stable across at least two cohorts, with cohort retention (D7, D30) tracking against the LTV assumption.
Phase 2 starts once the install economy is stable. Now the campaigns split into two parallel engines:
Install campaigns (continued from phase 1): still finding new audience, still optimizing CPI. The job is the same, just with more accumulated audience signal.
Subscription/monetization campaigns (new in phase 2): targeting users who already installed but haven't trialed, or targeting lookalikes of users who completed trial in phase 1.
Pricing strategy is critical here. The Hair Try-On example: trial pricing was set at $6/month (an aggressive entry point) with a plan to scale price progressively toward $30/month after break-even and cohort retention validation. The discipline is "cheap in to validate, optimize price up once the floor is real." Charging full target price during phase 2 collapses trial volume and obscures the LTV signal you're trying to read.
What to measure in phase 2:
| Metric | Why it matters | What "good" looks like |
|---|---|---|
| Trial start rate per install cohort | Tells you if the audience cares enough to try | 10-25% for most categories |
| Trial-to-paid conversion | The core unit-economics signal | 20-40% for B2C subscription |
| D7 / D30 cohort retention | Predicts LTV | D7 retention above 30% |
| LTV by acquisition source | Tells you which campaigns to scale | At least 2 cohorts of data |
Don't move to phase 3 until two cohorts of trial-to-paid data agree with each other. Single-cohort signals are noise. Two cohorts that point the same direction are signal.
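A sketch of that two-cohort check, assuming cohort-level trial and conversion counts are already available. The 20% relative-agreement band is an assumption; the framework only asks that consecutive cohorts point the same direction:

```python
# Two-cohort agreement on trial-to-paid conversion. The 20% relative
# band is an assumed definition of "agree"; all counts are placeholders.
def trial_to_paid(c: dict) -> float:
    return c["paid"] / c["trials"]

def cohorts_agree(a: dict, b: dict, band: float = 0.20) -> bool:
    ra, rb = trial_to_paid(a), trial_to_paid(b)
    return abs(ra - rb) / ra <= band

week1 = {"installs": 2_400, "trials": 430, "paid": 120}
week2 = {"installs": 2_100, "trials": 390, "paid": 101}

for name, c in (("week 1", week1), ("week 2", week2)):
    print(f"{name}: trial start {c['trials'] / c['installs']:.0%}, "
          f"trial-to-paid {trial_to_paid(c):.0%}")
print("signal, not noise:", cohorts_agree(week1, week2))  # True
```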
Phase 3 — Scale on proven unit economics
- Goal: Compound growth with confidence that each acquired user delivers positive unit economics.
- Primary metric: LTV/CAC ratio, by acquisition source.
- Exit criteria: None. This is steady state.
Phase 3 only starts when phases 1 and 2 have produced stable unit economics. Specifically: blended CPI within tolerance, trial-to-paid conversion stable across cohorts, D7 and D30 retention tracking against the LTV assumption, and CAC payback period within the business's runway tolerance.
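In practice that gate reduces to a handful of arithmetic checks. A sketch with placeholder numbers; the 3:1 LTV/CAC bar and 12-month payback ceiling are assumptions, since the framework only requires payback within the business's runway tolerance:

```python
# Phase 3 gate: LTV/CAC and CAC payback. The 3.0 ratio bar and 12-month
# ceiling are illustrative assumptions; all inputs are placeholders.
ltv = 54.0              # modeled LTV per paying user, USD
cac = 16.0              # blended CAC per paying user, USD
monthly_revenue = 6.0   # current price point per paying user

ltv_cac = ltv / cac                      # 3.4
payback_months = cac / monthly_revenue   # 2.7 months to recover CAC

print(f"LTV/CAC {ltv_cac:.1f}, payback {payback_months:.1f} months")
print("scale:", ltv_cac >= 3.0 and payback_months <= 12.0)  # True
```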
When phase 3 starts, the budget gets larger, the audience targeting gets broader, and the creative pipeline scales. But the discipline of measurement doesn't change. The same metrics that signaled "ready to scale" remain the daily operating dashboard. If any of them drift, budget pulls back, not forward.
Most apps that hit acquisition (or unicorn valuations, or sustained profitable growth) had a disciplined phase 3 built on a real phase 1 and phase 2. The exceptions are stories. The pattern is the playbook.
A note on marketplace apps (different shape)
If the app is a two-sided marketplace, the phase model still applies but the inside of phase 1 changes. Instead of validating "does the audience exist," phase 1 validates "which side of the marketplace is cheaper to acquire and what does catalog density need to look like before demand-side acquisition makes sense."
The Tiffins case is the canonical example: phase 1 acquired sellers (cheaper, faster), phase 2 layered buyer acquisition once catalog density was real, phase 3 scaled both sides in parallel. Two and a half years from three cofounders to 35-40 employees and acquisition by an investment fund.
Marketplace apps are the hardest category to launch. The phase discipline becomes more important, not less.
How Loocro runs phased launches
Every cold-start app engagement starts with the phase model in writing. Phase 1 budget allocation, exit criteria, primary metric. Phase 2 budget allocation, exit criteria, primary metric. Phase 3 conditions.
The weekly business review measures the current phase against its exit criteria, not against arbitrary growth targets. The conversation isn't "did we grow this week?" It's "are we still in phase 1, ready for phase 2, or ready for phase 3?"
That discipline protects the runway. Funded founders get a clear answer to whether the business is validating, monetizing, or scaling. Investors get honest reporting that tracks against the unit economics, not against vanity inputs. The team operates against criteria, not against pressure.
The 30-minute phased launch diagnostic
If you're already running paid media on a cold-start app and don't know which phase you should be in, run this (a sketch automating the quantitative checks follows the list):
- Pull CPI by platform (Android, iOS) for the last 4 weeks. If the numbers are not separated, you're flying blind on platform economics.
- Pull trial-start rate per install cohort for the last 4 weeks. If you don't have cohort-level data, the monetization layer doesn't have a clean signal yet.
- Pull trial-to-paid conversion for the oldest trial cohorts you have. Below 15%? The audience phase 1 is finding might not be the audience that pays.
- Calculate your CAC payback period based on current LTV assumption. Above 18 months for a B2C app with category-typical retention? Phase 3 scale will burn the runway before the math compounds.
- Ask yourself: am I in phase 1, phase 2, or phase 3? If you can't answer with one sentence, the discipline isn't installed yet.
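A minimal sketch of the quantitative checks above; every input value is a placeholder you'd replace with your own exports, and the cohort-data and one-sentence questions stay manual:

```python
# The quantitative pieces of the 30-minute diagnostic. Thresholds come
# from the checklist (15% trial-to-paid floor, 18-month payback ceiling);
# every input value here is a placeholder.
inputs = {
    "cpi_by_platform": {"android": 0.42, "ios": 1.85},   # last 4 weeks
    "trial_to_paid_oldest_cohort": 0.22,
    "cac": 16.0,
    "monthly_revenue_per_payer": 6.0,
}

checks = {
    "CPI separated by platform": len(inputs["cpi_by_platform"]) >= 2,
    "trial-to-paid above 15% floor": inputs["trial_to_paid_oldest_cohort"] >= 0.15,
    "CAC payback under 18 months":
        inputs["cac"] / inputs["monthly_revenue_per_payer"] <= 18,
}
for name, ok in checks.items():
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
```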