Assumptions don't kill projects. Unquestioned ones do.

I’ve had the opportunity to review a large number of business validation projects over the past year — across different teams, industries, and levels of experience. Smart, motivated people. Weeks of work. Genuine ambition. And across nearly all of them, the same pattern kept appearing — one that I don’t think is specific to any particular group or context.

Teams were building before they knew what they were building for.

The Mistake That Keeps Showing Up

In product teams, in startups, in corporate innovation units, and in early-stage ventures — there is a deeply ingrained instinct to make something. To prototype. To launch. To show tangible progress.

That instinct is not wrong. But making something and actually using it to learn are two different things.

What I kept seeing wasn’t a failure to build. In most of the projects I reviewed, teams had done exactly what was asked of them: they had created landing pages, built clickable prototypes using AI tools, and put together assets specifically designed for validation — five-second tests, smoke tests, the kind of lightweight experiments described in books like Testing Business Ideas. The infrastructure for learning was there.

And then almost nobody used it.

The landing pages existed, but weren’t shown to real potential customers. The prototypes were polished, but never put in front of someone unfamiliar with the idea. The assets that could have generated real signal sat unused — while the decks grew longer, the financial models more elaborate, and the assumptions underneath all of it remained completely untested.

The market size? A top-down estimate from a secondary source, unverified. The customer pain? Inferred from a few conversations, sometimes none at all. The willingness to pay? Assumed based on competitive pricing. The unit economics? A spreadsheet built on numbers with no footnotes.

The assets were real. The validation wasn’t.

Why This Keeps Happening

The most common explanation I hear is “we didn’t have time.” But I think the real reason runs deeper: people had convinced themselves that validation is complicated.

It isn’t.

Showing your landing page to your neighbour and asking them to tell you what they think it does — that’s validation. Sending a prototype to five people who don’t know anything about your idea and watching where they get confused — that’s validation. Asking someone who fits your target customer profile whether they currently spend any time or money trying to solve the problem you’re working on — that’s validation.

None of these require a methodology. None of them require budget. They require willingness: a genuine openness to hearing something that might change your direction.

The complication usually comes from the team, not the task. Because you’ve been working on the idea for weeks, it starts to feel fragile. Showing it to someone who might not “get it” feels risky. Hearing negative feedback feels like a threat. So instead of running a quick, uncomfortable experiment, teams do more work on the thing itself — more slides, more detail in the financial model, more polish on the prototype — and call that progress.

There’s also a subtler problem: most people conflate confidence with evidence. Domain expertise, pattern recognition, personal experience — these feel like knowledge. Sometimes they are. But until you’ve tested an assumption against the real world, it’s still an assumption. And assumptions have a way of surviving unchallenged all the way to launch, where they finally get disproven at the worst possible moment.

If you’re working on a problem that people genuinely care about, getting early feedback should never be difficult. The people who have that problem are everywhere. You don’t need a research firm to find them. You need to ask.

The Experiment Ladder — and Why the Bottom Rungs Matter Most

One of the frameworks I find most useful — both in product contexts and when working with early-stage teams — comes directly from Testing Business Ideas by David Bland and Alex Osterwalder. The core idea is that experiments exist on a ladder: the higher the rung, the more expensive and time-consuming it is to run.

An MVP sits near the top of that ladder. It requires design, engineering, infrastructure, and weeks or months of time. But there are many rungs below it — and the teams I worked with had already built assets that sat on those lower rungs. Landing pages. Clickable prototypes. The tools were ready.

What the lower rungs actually look like in practice:

A five-second test. Show someone your landing page for five seconds, then ask them what they think the product does and who it’s for. If they can’t tell you, your value proposition needs work — and you’ve learned that in five seconds, for free.

An unmoderated prototype walkthrough. Hand someone your clickable prototype and ask them to complete a task without any guidance from you, then review the session recording. Note where they hesitate, where they click the wrong thing, where they give up. You’ll learn more in twenty minutes than from weeks of internal debate.

A concierge test. Manually deliver the service yourself, before you’ve automated anything. Does the customer actually use it? Do they pay? Do they come back? This tells you whether the value is real before you’ve built the delivery mechanism.

A Wizard of Oz test. The front end looks like a real product, but the back end is human. You learn what customers actually need — and how they behave — before committing to what to build and how.

Pre-sales or letters of intent. If someone won’t commit even a small amount before you build, that’s an important signal. It doesn’t mean the idea is dead, but it does mean the urgency you assumed may not exist.

Five customer conversations with a structured script. Not to pitch — to listen. What are they actually trying to solve? How are they solving it today? What would make them switch? Are they already spending money trying to fix this, or are they just vaguely aware the problem exists?

The discipline isn’t in knowing these rungs exist. It’s in actually using them — before moving up.

Know Your Customer Before You Know Your Market

A related issue that appears just as often: teams define their target customer too broadly, then build a financial model on top of that definition. The result is a market size that looks impressive but is essentially meaningless.

“Students.” “Blue-collar workers.” “Consultants.” These are categories, not customer segments. If you’re building a tool for marathon runners, saying “athletes” is too broad — tennis players and basketball players have entirely different needs, habits, and buying patterns. A vague customer definition has cascading consequences: your market sizing becomes questionable, your value proposition feels generic, and your go-to-market strategy has no real starting point.

A useful test: can you describe your ideal first customer so specifically that you could find ten of them this week and have a conversation with each one? If not, the definition is too broad.

Beyond definition, there’s the question of pain severity. A lot of early-stage pitches describe a problem that resonates with some people — but don’t answer the harder question: how intensely do they feel it, and are they already trying to solve it? There’s a meaningful difference between someone who is aware of a problem and someone who is actively spending time or money trying to fix it. The latter is your real market. The former is just a survey response.

Financial Rigour Is a Credibility Signal

Numbers without sources are not numbers. They’re hypotheses dressed up as facts.

This sounds obvious, but it’s one of the most common issues I encounter when reviewing business cases. Market size figures, unit economics, revenue projections, competitive benchmarks — presented with confidence, with no indication of where they came from. In the age of AI-generated content, unsourced data raises immediate questions about reliability. Any experienced reviewer will notice, and it undermines everything else in the document.

If you conducted primary research, describe the methodology and sample size. If you relied on secondary research, cite the sources. If you used AI tools to generate summaries or data points, verify the underlying sources independently — AI can hallucinate citations, and including a fabricated reference is worse than having none at all.

Equally important: make your calculations easy to follow. After weeks of working on your own numbers, everything may feel obvious. It isn’t, to an outside reader. A simple worked example — what does one customer pay, what value do they receive, what margin do you earn — makes the rest of a financial model immediately more credible and easier to interrogate.
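To make that concrete, here is what such a worked example might look like in a few lines of Python. Every number is hypothetical; each stands in for a figure your real model would need to source:

```python
# A worked per-customer example. Every number here is hypothetical and
# stands in for a figure your real model would need to source and footnote.

monthly_price = 49.0   # what one customer pays per month (assumed)
cost_to_serve = 14.0   # hosting, support, payment fees per customer per month (assumed)
cac = 420.0            # cost to acquire one customer (assumed)

gross_margin = monthly_price - cost_to_serve   # 35.00 per customer per month
margin_pct = gross_margin / monthly_price      # ~71%
payback_months = cac / gross_margin            # 12 months to earn back CAC

print(f"Gross margin: {gross_margin:.2f}/month ({margin_pct:.0%})")
print(f"CAC payback: {payback_months:.1f} months")
```

If a reader can challenge any single line ("why 420 for CAC?"), the model is doing its job: every input is now a visible, testable claim rather than a buried cell in a spreadsheet.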

Bottom-up market sizing deserves special mention here. Top-down estimates (“the global market is worth $X billion”) are useful for context, but they don’t tell you how you’ll actually acquire customers. Bottom-up sizing forces you to think through the real mechanics: how do you get from one customer to ten? From ten to a hundred? Each stage requires different strategies, resources, and capabilities. This exercise makes a growth plan tangible rather than aspirational.
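Here is a sketch of what bottom-up sizing forces you to write down, again with entirely hypothetical inputs (borrowing the marathon-runner example from earlier):

```python
# A bottom-up sizing sketch. All inputs are hypothetical, and each one
# is a claim you should be able to defend or test.

reachable_segment = 8_000    # e.g. marathon runners you could plausibly list and reach (assumed)
contactable_share = 0.25     # fraction you can actually get in front of in year one
trial_conversion = 0.10      # contacted prospects who try the product
paid_conversion = 0.40       # trials that convert to paying customers
revenue_per_customer = 49.0 * 12   # annual revenue per paying customer (assumed)

customers = reachable_segment * contactable_share * trial_conversion * paid_conversion
revenue = customers * revenue_per_customer

print(f"Year-one paying customers: {customers:.0f}")   # 80
print(f"Year-one revenue: {revenue:,.0f}")             # 47,040
```

Each factor in that chain is a separate, testable assumption, and most of them can be tested with the lower-rung experiments above. A top-down “$X billion” figure hides all four.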

Intellectual Honesty Is a Strength, Not a Weakness

This is the piece I find myself coming back to most often, because it applies far beyond business validation.

A pattern I see consistently — in early-stage decks, in corporate product roadmaps, and in consultant presentations — is what I’d call optimism stacking. Everything points in one direction. The market is enormous. The competition is negligible. The growth curve is a hockey stick. The unit economics are near-perfect. The LTV:CAC ratio is ideal.

To someone without much experience, that looks impressive. To someone who has seen a lot of these, it’s a red flag. Not because success is impossible, but because when every single variable in a model is set to best case, you’re not looking at a forecast — you’re looking at a wish. And wishes have a way of collapsing on contact with reality.
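One way to see why this matters: if the model only works when every assumption holds, the probabilities compound. A rough sketch, assuming six independent assumptions you would each rate as 80% likely:

```python
# How optimism stacks. The probabilities are illustrative, and treating
# the assumptions as independent is itself a simplification.

assumption_confidence = [0.8] * 6   # six bets you'd each call "pretty likely"

joint = 1.0
for p in assumption_confidence:
    joint *= p

print(f"Chance all six hold: {joint:.0%}")   # ~26%
```

Treating the bets as independent is itself generous; the point is simply that a stack of individually reasonable assumptions is collectively a long shot.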

The strongest cases I’ve reviewed do the opposite. They clearly label what is known versus assumed. They show what would have to be true for the model to work, then explain how they plan to test those things. They acknowledge the risks and articulate how they plan to reduce them. They show realistic growth scenarios alongside the optimistic one.

That kind of thinking — structured, honest, rigorous — is what separates teams that build things that work from teams that build things that look good on paper. It’s also what builds trust with investors, clients, and collaborators. People with real experience trust founders and product leaders who understand their risks. They are immediately sceptical of those who pretend they have none.

A well-reasoned decision not to proceed, backed by evidence, is as valuable as a go decision. Knowing when to stop is as important as knowing when to build.

What This Looks Like in Practice

Whether you’re a product manager scoping a new feature, a founder deciding whether to build a prototype, or a team making a case for a new initiative — the same discipline applies.

Before committing resources, ask:

What are the two or three assumptions that, if wrong, would kill this? Write them down explicitly. If you can’t articulate them, you haven’t thought deeply enough about the downside.

What is the fastest and cheapest way to test each one? You don’t need an MVP to learn whether a problem is real. You need a conversation, a landing page, or a well-designed experiment.

What does a no-go decision look like? If you can’t define in advance what evidence would cause you to stop or pivot, you’re not validating — you’re confirming. That’s a different exercise, and a much less useful one.

Where are your numbers coming from? If a figure in your model has no source, treat it as a hypothesis, not a fact. Then go find the fact.

These questions are not complicated. But they require something that is harder than it sounds: a genuine willingness to be wrong early, so you can be right when it matters.

“But What About the Companies That Didn’t Do Any of This?”

This is the objection that always comes up, and it’s worth addressing directly.

Airbnb started because two designers couldn’t pay their rent and spotted a design conference that had sold out San Francisco’s hotels. They threw three air mattresses on their living-room floor and built a simple blog. No validation framework. No assumptions matrix. No experiment ladder. Slack emerged from a failed video game — Stewart Butterfield’s team had built an internal communication tool while working on a game called Glitch, realised the game wasn’t viable, and pivoted to the tool. Instagram was a cluttered check-in app called Burbn before Kevin Systrom stripped everything out and kept only photos.

These stories look like lightning strikes. And they’ve become part of how we romanticise entrepreneurship — the accidental idea, the garage origin, the pivot that changed everything. If it worked for them without any structured process, why bother with one?

A few things are worth noting.

First, the process was there — it just wasn’t labelled. Airbnb’s founders didn’t do formal customer discovery, but they were the customers. They had the problem themselves, first-hand, in real time. They didn’t need to validate whether conference visitors wanted accommodation alternatives because they watched three strangers sleep on their floor and pay for it. That is validation — raw, accidental, but real. Steve Blank, who formalised much of what we now call Customer Development, has argued exactly this: the methodology isn’t invented wisdom, it’s the codification of what successful founders had always done intuitively. The process gives a name and a structure to behaviour that was already there in the best founding stories.

Second, these are the survivors we remember. For every Airbnb there are thousands of ventures that also skipped the validation step and simply disappeared. We don’t tell those stories. Survivorship bias makes the lightning-strike narrative feel more common than it is — and more repeatable than it is.

Third, the world has changed. In 2007, Chesky and Gebbia had almost no competition, a relatively unsaturated internet, and low barriers to getting early signal. Today, building even a basic product takes longer, costs more, and lands in a far more crowded market. The margin for error is smaller. The cost of a wrong assumption, compounded over months of development, is higher. A structured approach to validation isn’t a constraint on creativity — it’s a way of making sure the creativity is pointing in the right direction before you’ve spent everything on it.

The systematic process and the intuitive process aren’t opposites. They’re the same underlying behaviour: find a real problem, test whether your solution actually solves it, and be honest about what you’re learning along the way. The difference is that a structured approach makes this reproducible — and makes it possible to do it even when you don’t happen to be your own target customer sleeping on an air mattress in San Francisco.

Where to Go From Here

If this resonates and you want to go deeper, two books I come back to repeatedly are worth your time.

Testing Business Ideas by David J. Bland and Alexander Osterwalder (Strategyzer) is the most practical field guide I’ve found for the experiment side of this. It catalogues 44 different types of experiments, organised by the assumptions they test and the resources they require. It’s also genuinely pleasant to read — visual, well-structured, and built to be used in the middle of a project rather than read once and shelved.

Steve Blank’s The Four Steps to the Epiphany is the intellectual foundation for much of what the lean startup movement became. It’s less polished and more demanding — but if you want to understand why customer discovery matters before building anything, Blank’s thinking is where it starts. His central argument is simple and still underappreciated: there are no facts inside your building. Everything you believe about your customer, your market, and your product is a hypothesis until you’ve tested it outside.

The Broader Point

The reason these patterns appear across different teams, industries, and contexts is that they’re not really about business skills. They’re about how humans deal with uncertainty. We reward visible output over rigorous thinking. We celebrate builders over validators. We treat confidence as competence, and polish as proof.

The good news is that the habit is learnable. And once you’ve built it, it changes how you approach every project — not as a thing to be made, but as a hypothesis to be tested. The goal isn’t to produce a perfect plan. It’s to develop the intellectual discipline to know when to pursue and when to walk away — and to make that call based on evidence rather than optimism.

That discipline, more than any specific skill or tool, is what makes the difference.
