I was sitting in a QBR with a SaaS company about three years ago. The team had run 22 experiments that quarter. Seventeen of them were button colour changes, headline tweaks, and layout shuffles on the pricing page. Five were inconclusive. None had moved revenue. The head of growth opened the meeting by saying they had “a very active CRO programme.” He was right. It just wasn’t working.

That’s the thing about conversion rate optimisation. It’s easy to look busy. It’s hard to actually move the number.

If you’re trying to figure out how to increase conversion rate and you’re not getting results, the problem usually isn’t your process. It’s what you’re pointing that process at.

Most Teams Are Optimising When They Should Be Diagnosing

Here’s what I see constantly. A team spots a low conversion rate on their free trial signup page. They immediately start brainstorming experiment ideas. Different CTA copy. Shorter form. Social proof. They pick one, set it live, wait three weeks, get a flat result and move to the next idea.

Nobody stopped to ask why people weren’t converting in the first place.

Diagnosis has to come before experimentation. Always. You need to know whether you have a traffic problem, a message problem, a friction problem or a trust problem before you start changing things. Running experiments without that diagnosis is guesswork.

Pull your session recordings. Read your support tickets. Talk to five customers who signed up last month and five who didn’t. Spend a week doing that before you open your testing tool. The thing that’s killing your conversion rate is almost always obvious once you look at the right data. It’s rarely the button colour.

Copy Changes Outperform Layout Changes. Nobody Talks About This.

I’ve run hundreds of experiments across e-commerce, SaaS, and enterprise software. The tests that consistently produce the biggest lifts are not the redesigns. They’re the rewrites.

Change the headline on a pricing page from feature-led to outcome-led and you will routinely see 15 to 25 percent lifts. Rewrite your free trial CTA from “Start Free Trial” to something that addresses the specific fear a user has at that moment and the number moves. Restructure the copy on an onboarding email to lead with what the user gets rather than what they need to do, and retention improves.

This matters because most SaaS teams spend their time on UI experiments. New layouts. Sticky navs. Streamlined checkout flows. These things matter, but they’re rarely where the biggest conversion rate improvements are hiding. Your users are not confused by your layout. They’re unconvinced by your message.

If you want a practical starting point: find the moment in your funnel where drop-off is highest, read the copy on that page out loud, and ask yourself honestly whether it sounds like something a real person would say to another real person. If it doesn’t, fix that before you touch the design.

The Experiment Mindset Most People Are Missing

This is the part that took me years to internalise, and I still see experienced CRO practitioners struggle with it.

The reason we run experiments is because we don’t know the answer. That sounds obvious. It isn’t. Most teams run experiments hoping to confirm an idea they’ve already fallen in love with. When the test loses, they call it a failed experiment. They’re wrong. A losing experiment isn’t a failure. It’s the whole point.

If your variant loses, you just found out that your assumption was incorrect. That’s valuable. It means you didn’t ship something that would have hurt your conversion rate. It means your model of how your users think just got more accurate. A team that runs ten losing experiments and learns something real from each one is doing better work than a team that declares every inconclusive test a win and ships it anyway.

The question to ask after every experiment, win or lose, is: what did we learn, and does this change what we do next? If you can’t answer that, the experiment wasn’t designed well enough in the first place.

Where SaaS Teams Specifically Go Wrong

SaaS conversion funnels have specific failure points that keep coming up. I’ll be direct about the ones I see most often.

The first is optimising the acquisition page while ignoring activation. You can double your free trial signups and still watch revenue flatline if the onboarding experience is broken. Conversion rate isn’t just about the moment someone clicks “sign up.” For SaaS, it runs all the way through to paid conversion, and sometimes through to renewal. If you’re only looking at top-of-funnel click-through, you’re measuring the wrong thing.

The second is treating all traffic as the same. A visitor coming from a targeted LinkedIn ad who already knows your product category behaves completely differently from someone who landed on your homepage from a generic search query. If you’re running experiments across blended traffic without segmenting your results, you’re averaging out the signal. A test can look flat overall while producing a strong lift for one segment and a meaningful drop for another. Split your analysis.
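
If it helps to make that concrete, here's a rough sketch of what splitting the analysis can look like. The segment names and the tiny made-up dataset are placeholders, not a reference to any particular testing tool; in practice you'd load an export with one row per visitor.

```python
# Minimal sketch: per-segment A/B results instead of one blended number.
# The rows below are made-up placeholders; in practice you'd load your
# experiment export (one row per visitor: variant, segment, converted).
import pandas as pd

df = pd.DataFrame({
    "variant":   ["control", "variant_b"] * 4,
    "segment":   ["linkedin_ad", "linkedin_ad", "organic_search", "organic_search"] * 2,
    "converted": [1, 1, 0, 0, 0, 1, 1, 0],
})

print("Blended conversion rate by variant:")
print(df.groupby("variant")["converted"].mean().round(3), "\n")

print("Per-segment conversion rate by variant:")
print(
    df.groupby(["segment", "variant"])["converted"]
      .agg(visitors="count", conversion_rate="mean")
      .round(3)
)
```

The blended number can look flat while the per-segment table shows one audience responding and another pushing back. That's the signal you lose when you don't split.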

The third is moving too fast at too small a scale. I see early-stage SaaS teams with 2,000 monthly visitors running A/B tests. At that traffic level, you will not reach statistical significance on anything meaningful within a reasonable timeframe. Your energy is better spent on qualitative research, talking to users, improving your core message, and building the audience you need before you invest heavily in experimentation infrastructure.
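
To put rough numbers on that, here's a back-of-the-envelope sample-size estimate using the standard two-proportion approximation. The baseline rate and the target lift are assumptions I've picked for illustration; plug in your own.

```python
# Rough two-proportion sample-size estimate (normal approximation).
# Illustrates why ~2,000 monthly visitors is too little for typical lifts.
# The 3% baseline and 20% relative lift below are assumptions for the example.
from math import ceil

def visitors_per_variant(baseline, lift, alpha_z=1.96, power_z=0.8416):
    """Approximate visitors needed per variant for 95% confidence, 80% power."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((alpha_z + power_z) ** 2 * variance / (p2 - p1) ** 2)

n = visitors_per_variant(baseline=0.03, lift=0.20)
print(f"~{n:,} visitors per variant (~{2 * n:,} total)")
# At 2,000 visitors a month split across two variants, that's over a year of traffic.
```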

The Practical Framework I Actually Use

When a client asks me how to increase their conversion rate, this is roughly how I approach it.

Start with a conversion audit. Map the full funnel from first touch to paid conversion. Identify where the drop-off is largest. Pull quantitative data first, then layer qualitative research on top of it. You’re looking for the gap between what users expect and what they experience.
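
If it's useful, this is the shape of the drop-off map I mean. The stage names and counts are placeholders; pull the real numbers from your analytics before reading anything into them.

```python
# Minimal sketch of a funnel drop-off map. Stage names and counts are
# illustrative placeholders, not real benchmarks.
funnel = [
    ("Landing page visit", 40_000),
    ("Pricing page view", 9_500),
    ("Trial signup", 1_100),
    ("Activated (key action in product)", 430),
    ("Paid conversion", 95),
]

worst = None
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{stage} -> {next_stage}: {rate:.1%}")
    if worst is None or rate < worst[2]:
        worst = (stage, next_stage, rate)

print(f"\nLargest drop-off: {worst[0]} -> {worst[1]} ({worst[2]:.1%} carry-through)")
```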

Prioritise by impact and confidence. Not every problem you find is worth solving through experimentation. Some things you should just fix. A broken form field doesn’t need a test. A confusing pricing structure might. The question is whether you genuinely don’t know which version is better, because if you do know, just ship it.
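
One lightweight way to keep that honest is to score the backlog and mark which items actually need a test. The 1-to-5 scales and the items below are illustrative assumptions, not a formal framework.

```python
# Simple impact x confidence ranking of a CRO backlog. The scores and items
# are made up for illustration; "needs_test" captures "if you already know,
# just ship it".
backlog = [
    {"idea": "Rewrite pricing page headline", "impact": 4, "confidence": 4, "needs_test": True},
    {"idea": "Fix broken coupon code field",  "impact": 3, "confidence": 5, "needs_test": False},
    {"idea": "Reorder pricing tiers",         "impact": 3, "confidence": 2, "needs_test": True},
]

for item in sorted(backlog, key=lambda i: i["impact"] * i["confidence"], reverse=True):
    action = "test it" if item["needs_test"] else "just ship it"
    print(f"{item['impact'] * item['confidence']:>2}  {item['idea']}  ->  {action}")
```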

Write a proper hypothesis before you build anything. Not “we think changing the CTA will increase signups.” Something like: “We believe that users on the pricing page are hesitant because they don’t understand what’s included in the free trial. Changing the CTA from ‘Start Free Trial’ to ‘Try free for 14 days, no card needed’ will reduce that hesitation and increase trial starts by at least 10 percent among new visitors.” That’s a testable hypothesis. It has a mechanism, a prediction and a measurable outcome.
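
If your team keeps experiment docs alongside code, one way to force that structure is to write the hypothesis as structured data. This is just a sketch of the idea; the field names are mine, not a standard.

```python
# A hypothesis captured as structured data rather than a slogan. One way to
# make the three parts explicit: mechanism, prediction, measurable outcome.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    observation: str       # what the research or data showed
    mechanism: str         # why we believe the change will work
    change: str            # what we will actually modify
    prediction: str        # the directional expectation
    metric: str            # the single number we will judge it on
    minimum_effect: float  # the smallest lift worth acting on

pricing_cta = Hypothesis(
    observation="Pricing page visitors hesitate at the trial CTA.",
    mechanism="They don't know what the free trial includes or whether a card is required.",
    change="CTA copy: 'Start Free Trial' -> 'Try free for 14 days, no card needed'.",
    prediction="Trial starts increase among new visitors.",
    metric="trial_start_rate_new_visitors",
    minimum_effect=0.10,
)
```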

Then run the experiment, document the result, and update your understanding of your users. Repeat.

One Thing Worth Being Honest About

Not every company needs CRO. I know that’s a strange thing for a CRO consultant to say. But if your conversion problem is actually a product problem, or a positioning problem, or a pricing problem, running experiments on your landing page is not going to fix it. I’ve worked with companies where the honest answer was that the product wasn’t ready for the market they were targeting, and no amount of headline testing was going to change that outcome.

Before you invest in a CRO programme, make sure you’ve correctly identified what the actual problem is. Sometimes the answer to “how do we increase our conversion rate” is not “run more experiments.” Sometimes it’s something else entirely.

Before You Run Your Next Experiment

If you’ve got an experiment idea you’re ready to test, ask yourself this:

  • Is this actually worth running?
  • Is the hypothesis clear?
  • Is the expected impact meaningful?
  • Is there a real insight driving it, or is it just a gut feeling dressed up as a test idea?

I built the Experiment Validator specifically for this moment. Drop your experiment idea in and it’ll tell you whether the hypothesis is solid enough to be worth your time and your traffic (it’s free to use for now). A bad experiment is not neutral. It costs you time, it costs you sample size, and it costs you the opportunity to run something better.

Check whether your next experiment is worth running before you build it. It takes two minutes and it might save you three weeks.

Kyle Newsam

An optimizer by trade & lifestyle. Truly, any experience or interaction becomes an experiment & something I can learn from. Currently moving around the globe, working from the coolest locations that the younger me could never have imagined.
