That’s the core issue with most conversion funnel optimisation work. Teams pick a page they care about and start testing. They skip the step that actually tells you where the money is being left on the table.
Start With the Funnel Map, Not the Test Plan
Before you run a single experiment, you need to know what your funnel actually looks like. Write it out. Every step a user touches from first landing to paid conversion. For a typical SaaS product, that might be:
- ad click
- landing page
- signup form
- email confirmation
- onboarding steps one and two
- feature activation
- upgrade prompt
- payment
Most teams can sketch this in ten minutes. The problem is they rarely calculate the conversion rate at each individual step. They look at top-line numbers. “Our trial-to-paid rate is 8%.” Fine. But where exactly is the 92% going?
Pull the data step by step. If 1,000 people hit your landing page and 600 start the signup form but only 310 complete it, that’s a 48% drop on a single step. That’s your starting point. Not the pricing page. Not the checkout. The signup form.
This is funnel analysis. It’s not complicated. It’s just arithmetic most teams skip because they’re eager to get to the testing part.
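If you want that arithmetic to be repeatable instead of a one-off spreadsheet exercise, a few lines of code will do it. A minimal sketch in Python, using the numbers from the example above; in practice you'd extend the list with whatever steps your analytics tool exports, all the way through payment:

```python
# Step counts pulled from analytics (the numbers from the example above).
funnel = [
    ("landing page", 1000),
    ("signup started", 600),
    ("signup completed", 310),
]

# Compare each step to the one before it and report the drop-off.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{step} -> {next_step}: {rate:.0%} convert, {1 - rate:.0%} drop off")
```

Sort the output by drop-off and the top line is your starting point.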
The Multiplier Effect: Why Early Funnel Gains Hit Harder
Here’s something worth understanding properly. A 10% improvement at the top of your funnel compounds through every stage below it. A 10% improvement at the bottom only affects the people who made it that far.
Say 1,000 people visit your site. 200 sign up. 80 activate. 20 convert to paid. If you improve signup by 20%, you now have 240 signing up. That flows downstream: roughly 96 activations and 24 paid conversions, and you haven't touched anything else. You've also put 16 more activated users into the product, each one a candidate for expansion, referral, and future revenue.
Now flip it. You improve the final conversion step by 20%. You go from 20 to 24 paid users. The immediate gain looks the same, but it's the only number that moved, and the ceiling is much lower: you're working with just the 80 people who activated, and a step already converting at 25% has far less room to grow than one losing 800 of every 1,000 visitors.
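A minimal sketch of that arithmetic, assuming each step converts at a fixed rate (the numbers are the illustrative ones above):

```python
def run_funnel(visits, rates):
    """Apply each step's conversion rate in order; return the count at every stage."""
    counts = [visits]
    for rate in rates:
        counts.append(counts[-1] * rate)
    return counts

# Baseline from the example: 1,000 visits -> 200 signups -> 80 activations -> 20 paid.
baseline = run_funnel(1000, [0.20, 0.40, 0.25])     # [1000, 200, 80, 20]

# 20% relative lift at the top: every downstream pool grows with it.
top_lift = run_funnel(1000, [0.24, 0.40, 0.25])     # [1000, 240, 96, 24]

# 20% relative lift at the bottom: only the final number moves.
bottom_lift = run_funnel(1000, [0.20, 0.40, 0.30])  # [1000, 200, 80, 24]
```

Both lifts land on 24 paid users today. The difference is that the top-of-funnel lift also grew the signup and activation pools, and those larger pools multiply with every improvement you ship further down.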
This is why the biggest gains in a SaaS funnel are almost always in the first two or three steps. More people entering means more people everywhere. Teams obsess over the bottom of the funnel because that’s where the money is visible. But the leverage sits at the top.
The Drop-Off You’re Probably Ignoring
In SaaS specifically, there’s one transition that kills more revenue than most teams realise. It’s the gap between signup and first meaningful action. The moment someone creates an account and then just… doesn’t come back.
Onboarding is often treated as a product problem. “That’s the PM’s job.” But onboarding is a conversion problem. If someone signs up and never activates a core feature, you have not converted them. You’ve collected an email address. That’s not the same thing.
When I audited one SaaS client’s funnel, their signup-to-activation rate was 34%. Meaning 66 out of every 100 people who created an account never reached the moment where the product clicked for them. All their testing was happening after that wall. No wonder nothing moved.
Map your activation event. What does a user have to do for you to say “they get it now”? That’s usually connecting an integration, completing a first project, inviting a teammate, something concrete. Then measure how many signups actually reach that point. If it’s below 50%, that is your funnel optimisation priority. Full stop.
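Measuring it is straightforward once the activation event is defined. A minimal sketch, assuming you can export one row per signup with an activation timestamp that's blank if they never got there (the file and field names here are hypothetical):

```python
import csv

# Hypothetical export: one row per signup; activated_at is blank if they never activated.
with open("signups.csv") as f:
    rows = list(csv.DictReader(f))

activated = sum(1 for row in rows if row["activated_at"])
rate = activated / len(rows)
print(f"signup-to-activation: {rate:.0%} ({activated} of {len(rows)} signups)")
```

If that number prints below 50%, you've found your priority.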
Writing Changes First, Layout Changes Second
Once you've found your highest drop-off point, the instinct is to reach for the design tools. Move the button. Change the colour. Restructure the layout. Sometimes that's right. More often it isn't.
People rarely abandon your signup form, bounce from your pricing page, or drop out of onboarding because the button is in the wrong place. Usually something they read made them uncertain. Or something they needed to know wasn't there. Or the language felt like it was written for a different kind of person.
Copy changes are faster to run, and in the tests I've seen and run myself they outperform layout changes more often than not. Changing the headline on a signup form from "Create your account" to something that speaks to the outcome the user actually wants. Rewriting the microcopy next to a credit card field to address the exact fear a user has at that moment. Adding a single line of social proof directly above a friction point.
These aren’t glamorous changes. They don’t make good screenshots. But they work more often than a redesigned hero section, and almost nobody in the CRO space talks about them.
How to Prioritise When Everything Looks Broken
Once you map the funnel and calculate step-by-step drop-off rates, you’ll usually find three or four problem areas. You can’t fix everything at once. You need a prioritisation method that isn’t just gut feel.
The framework I use with clients has three inputs: volume, severity, and confidence. Volume is how many users hit this step. Severity is the percentage dropping off. Confidence is how sure you are you understand why they’re dropping off. You want high volume, high severity, and enough qualitative signal to form a real hypothesis.
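One way to make that comparison explicit instead of arguing about it in a meeting: score each candidate. A rough sketch, with confidence scored by hand on a 1-to-5 scale (the weighting here is illustrative, not a fixed formula):

```python
# Each candidate step: (name, users reaching it, drop-off rate, confidence 1-5).
candidates = [
    ("signup form",       1000, 0.48, 4),
    ("onboarding step 2",  310, 0.35, 2),
    ("upgrade prompt",     120, 0.60, 3),
]

# Priority = users lost at the step, weighted by how well you understand why.
def priority(step):
    name, volume, severity, confidence = step
    return volume * severity * (confidence / 5)

for step in sorted(candidates, key=priority, reverse=True):
    name, volume, severity, confidence = step
    print(f"{name}: score {priority(step):.0f} "
          f"({volume * severity:.0f} users lost, confidence {confidence}/5)")
```

A step with high volume and high severity but weak confidence is a signal to go collect qualitative data before testing, not to skip it.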
That last part matters more than most people admit. Running an experiment without a hypothesis is just pressing buttons. A hypothesis looks like this: “We believe that users are abandoning the signup form at the password step because we’re showing password requirements only after they’ve entered something invalid, which creates friction and doubt. If we show requirements upfront, we expect completion rate to increase.”
That’s testable. That’s specific. You know what you’re changing, why, and what you expect to happen. Without that, your test results are hard to learn from even when they win.
Losses Are the Work, Not the Problem
One more thing worth saying plainly. Most of your experiments won’t produce a win. That’s not a sign the process is broken. That’s the process working.
You’re running experiments because you don’t know the answer. If you knew, you’d just implement. The test result, win or loss, tells you something real about your users that you didn’t know before. A loss that’s properly documented is worth more to your programme than a win you can’t explain.
I’ve seen teams stop experimenting after a run of losses. They decide CRO doesn’t work for them. In most of those cases, what wasn’t working was the diagnostic stage. They were testing on low-volume pages with no clear hypothesis and no qualitative data backing the idea. The testing wasn’t the failure. The setup was.
Fix the setup. Map the funnel. Find the highest drop-off. Form a real hypothesis. Then test. That order matters.
Before You Run Your Next Experiment
If you’ve got an experiment idea sitting in your backlog right now, the question isn’t whether it’s a good idea. The question is whether it’s the right idea at this moment, for this funnel, with the evidence you actually have.
The Experiment Validator is built to answer exactly that. Run your idea through it before you build anything. It’ll tell you whether your hypothesis holds up, whether you’ve got the evidence base to justify the test, and whether you’re focusing on the right part of the funnel in the first place. Takes a few minutes. Saves you weeks of running the wrong test.