A few years into running experimentation programs for some of the biggest brands in the world, I sat in a meeting where someone proudly announced they’d “done CRO” on their checkout page. They’d changed the button from grey to green. Conversion went up 0.3%. Everyone cheered.
That moment stuck with me. Not because the result was wrong. But because nobody in the room questioned what CRO was actually supposed to answer. Nobody asked: are we solving the right problem? Are we even close to the thing that’s actually stopping people from buying?
This post is the answer I wish someone had given me earlier. Not the Wikipedia version. The one you get after 14 years running programs for Samsung, ASOS, Fitbit, Fujitsu and more.
The Actual Definition (Keep It Simple)
What is CRO? Conversion Rate Optimisation is the practice of systematically improving the percentage of visitors who take a desired action on your website or product.
That’s the CRO meaning most people agree on. Visitor arrives. Desired action exists… a purchase, a sign-up, a trial start. CRO is the structured process of understanding why people aren’t taking that action, forming hypotheses, running experiments and applying what you learn.
Simple enough. The problem is what people do with that definition.
What CRO Is Not
CRO is not just button colour testing. It’s not best-practice checklists. It’s not copying what your competitor did and calling it a hypothesis.
Most teams think CRO looks like this:
- Pick an element on the page
- Change it
- Run a test
- Declare a winner
- Repeat
That’s not optimisation. Real CRO starts before you touch a single element. It starts with a question: why are people not converting? And that question is harder than it looks. Because the honest answer… the one most teams won’t say out loud… is “we don’t know”.
Here’s the thing about experimentation that gets buried under all the tooling and dashboards and roadmaps… the reason we run experiments is because we don’t know the answer. If we knew the answer, there’d be no need for an experiment. The moment a team starts running tests to confirm what they already believe, they’ve stopped doing CRO.
CRO is also not a campaign. It’s not a quarter-long project. It’s not something you “finish.” It’s a systematic, ongoing capability. One that gets sharper the more honest you are about what you don’t know.
Why Most CRO Programs Fail Before They Start
The failure usually isn’t in the tools. It’s not even in the team. It’s in the framing.
Most companies hire for CRO when they want growth. Those are different problems. Optimisation removes friction from an existing journey. Growth expands who takes that journey in the first place. If your funnel is fundamentally broken (wrong audience, wrong message, wrong product-market fit) no amount of A/B testing will save you.
This is the uncomfortable truth I’ve had to tell clients more than once. You don’t have a conversion problem. You have a positioning problem. Or a traffic quality problem. Or a product problem. CRO can’t fix those. It can only reveal them, if you’re paying attention.
The other failure mode is velocity theatre. Teams celebrate shipping tests. They track how many experiments ran per quarter. But are they really learning anything? Tests run without proper sample sizes. Winners get called early. The same insights show up in retrospectives year after year because nobody built a system to retain and apply knowledge.
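If you want a concrete guardrail against calling winners early, work out the required sample size before the test starts and refuse to read results until you hit it. Here’s a minimal sketch using the standard two-proportion sample-size formula; the function name, baseline rate and target lift are illustrative placeholders, not figures from any real program.

```python
# A minimal sample-size check for a two-sided two-proportion z-test,
# at 95% confidence and 80% power by default. Illustrative only.
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect the given relative lift."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2                      # pooled rate under the null
    z_alpha = norm.ppf(1 - alpha / 2)          # 1.96 for 95% confidence
    z_beta = norm.ppf(power)                   # 0.84 for 80% power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# A 3% baseline and a hoped-for 10% relative lift needs roughly
# 53,000 visitors per variant. Smaller lifts need far more.
print(sample_size_per_variant(0.03, 0.10))
```

The point isn’t the exact formula. It’s that the number is fixed before the test starts, so “call it early” stops being an option.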
Good CRO programs treat every experiment as a way to remove uncertainty, not to hit a number. The wins are a byproduct. The real output is a sharper understanding of your customer.
One more thing worth naming: AI. Everyone in the CRO world is talking about it. Predictive testing, AI-generated copy variants, automated personalisation. Most of it is noise. The tools exist. The discipline to use them well doesn’t yet. AI in CRO is only as good as the question you ask it. And most teams are still asking the wrong questions.
What Good CRO Actually Looks Like in Practice
Good CRO is quieter than most people expect.
It looks like a team that spends more time on research than on building tests. Customer interviews. Session recordings. Heatmaps. Support ticket analysis. Talking to the sales team about what objections come up on calls. Mapping the actual journey people take, not the one you designed for them.
It looks like a hypothesis with a real mechanism behind it. Not “we think changing the headline will improve conversions” but “users in session recordings consistently scroll past the hero without pausing. We believe the current headline doesn’t match the intent they arrive with, so we’re going to test a version that leads with the outcome they’re searching for.”
That’s a testable claim. It has a reason. It has a direction. If it loses, you learn something. If it wins, you understand why.
Good CRO also looks like intellectual honesty about results. A flat test that shows no significant difference is still information. It’s the program telling you: this isn’t the constraint. Go look somewhere else.
The teams doing this well treat a null result with the same curiosity as a win. They ask: what does this tell us about our customers? What assumption did we have that this challenged?
That’s the discipline. That’s what CRO really means in practice… a system for removing what’s in the way, one honest question at a time.
Where to Start If You’re a SaaS Team
Stop looking at your homepage first. Start at the point of highest intent.
For most SaaS teams, that’s the trial sign-up flow, the onboarding sequence or the upgrade page. These are the moments where someone has already decided they might want what you offer. The friction here is expensive. Every person who drops at this stage was close. They raised their hand but something got in the way.
Map that journey. Watch recordings of real users. Find the moments where people pause, backtrack, or leave. Then ask: what would need to be true for them to keep going? Build your hypothesis around that.
Run one experiment at a time. Measure it properly… full sample size, full duration. Read the result honestly. Document what you learned. Then go again.
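Here’s a minimal sketch of what “read the result honestly” can look like once the test has run its full course… a plain two-sided two-proportion z-test on the final counts. The function name and the numbers are illustrative placeholders, not results from a real test.

```python
# A minimal sketch of reading a finished test: a two-sided
# two-proportion z-test on the final counts. Illustrative numbers only.
from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Control: 1,500 conversions from 50,000 visitors.
# Variant: 1,580 conversions from 50,000 visitors.
p = two_proportion_p_value(1500, 50_000, 1580, 50_000)
print(f"p-value: {p:.2f}")   # ~0.14 here: flat, and that is still information
```

A p-value like that doesn’t mean the test failed. It means this wasn’t the constraint… document it and go look somewhere else.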
That loop (observe, hypothesise, test, learn) is the whole game. Everything else is infrastructure built to support that loop running faster and more reliably over time.
You don’t need a massive team. You don’t need enterprise tooling on day one (believe me, I know). You just need the discipline to ask good questions and the patience to wait for real answers.
Start there.
If you want to know whether your next experiment idea is worth running, try the Experiment Validator. It walks you through the criteria that separate strong hypotheses from guesses, before you spend time building a test that won’t teach you anything.