A SaaS company came to me last year. Good product, decent traffic, conversion rates that had barely moved in eighteen months. They wanted a testing roadmap. I told them I needed to do an audit first. Their head of growth looked at me like I’d asked to audit their furniture.

Six weeks later we’d found a pricing page that loaded differently in Firefox, a free trial flow with seven steps where three would do, and onboarding copy that described features instead of outcomes. None of that was on anyone’s radar. They’d been running A/B tests on button colours.

That’s what a CRO audit actually does. It tells you where to look before you start running tests. Skip it and you’re optimising in the dark.

What a CRO Audit Actually Is

A CRO audit is a structured diagnostic of your conversion funnel. Not a list of quick wins. Not a teardown. A diagnosis. The same way a doctor doesn’t prescribe before they examine, a CRO audit doesn’t produce a test roadmap before it produces a clear picture of what’s broken and why.

For SaaS, that funnel usually runs from first visit through to activation, sometimes into expansion. The audit covers every stage where users drop. That includes the website, the sign-up flow, the onboarding sequence, and in some cases the in-product experience leading to first value.

What separates a real audit from a heuristic review someone knocked together in an afternoon is evidence. An audit should pull from at least three sources: quantitative data (where people are leaving), qualitative data (why they’re leaving), and technical checks (whether the product is even working as intended). Most audits I see rely on one of the three. That’s how you get a list of opinions dressed up as a diagnosis.

The CRO Audit Checklist: What to Cover

There’s no single template that works for every SaaS product. But there is a consistent set of areas that any serious audit needs to work through.

Analytics and Data Infrastructure

Before you interpret any data, check whether it can be trusted. This is the part teams consistently skip because it feels boring. It’s not boring when you’ve built three months of strategy on top of a broken GA4 setup.

Check that your analytics are firing correctly across the full funnel. Check for goal duplication, self-referral traffic, cross-domain tracking gaps if you’re running the marketing site and app on different domains. Check that your funnel events match what’s actually happening in the product. If your data team and your product team disagree about what “activation” means, you’ll find that here.
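If you have event-level data in a warehouse (GA4’s BigQuery export or anything similar), the duplication check takes minutes to script. Here’s a minimal sketch in Python, assuming the standard GA4 export column names; if your setup stores events differently, swap the column names for whatever you actually have.

```python
import pandas as pd

# One day of event-level data. A CSV keeps the example simple;
# in practice this would be a warehouse query.
events = pd.read_csv("events_export.csv")

# Treat "same user, same event name, same timestamp" as a duplicate.
# Legitimate repeat events almost never share an exact timestamp,
# so a high duplicate rate usually means the tag is firing twice.
dupes = events.duplicated(
    subset=["user_pseudo_id", "event_name", "event_timestamp"],
    keep="first",
)

summary = (
    events.assign(is_dupe=dupes)
    .groupby("event_name")["is_dupe"]
    .agg(total="size", duplicates="sum")
)
summary["dupe_rate"] = summary["duplicates"] / summary["total"]

# Anything beyond a fraction of a percent deserves a look at the tag setup.
print(summary.sort_values("dupe_rate", ascending=False).head(10))
```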

Check your sample sizes and date ranges. Too short a window and you’re reading noise. Too long and you’ve got seasonality muddying the picture.
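If you want a rough feel for whether a given window is long enough, look at how wide the uncertainty on the conversion rate actually is for the traffic you get. A back-of-the-envelope sketch, using a normal approximation and made-up numbers:

```python
import math

def conversion_ci(conversions: int, visitors: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a conversion rate."""
    rate = conversions / visitors
    margin = z * math.sqrt(rate * (1 - rate) / visitors)
    return rate - margin, rate + margin

# One week of data: 1,200 visitors, 30 trial starts (illustrative numbers).
low, high = conversion_ci(30, 1200)
print(f"One week: 2.5% observed, plausible range {low:.1%} to {high:.1%}")
# Roughly 1.6% to 3.4%, far too wide to read week-on-week movement as signal.

# Six weeks at the same traffic: 7,200 visitors, 180 trial starts.
low, high = conversion_ci(180, 7200)
print(f"Six weeks: 2.5% observed, plausible range {low:.1%} to {high:.1%}")
```

The point isn’t the exact interval. It’s that a short window on modest traffic can’t distinguish a real change from noise.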

Traffic and Acquisition

Conversion rate is a ratio. Numerator over denominator. A lot of “conversion problems” are actually traffic problems: the wrong people landing in the funnel in the first place.

Look at conversion rate by traffic source. Paid search visitors converting at 0.4% and organic at 3.2% is a paid targeting problem, not a landing page problem. Look at device breakdown. SaaS products frequently have wildly different conversion rates on mobile versus desktop, and teams ignore this because “our users are on desktop.” Sometimes. Check it rather than assume it.

Look at new visitor versus returning visitor conversion rates. If returning visitors aren’t converting significantly better, either something is broken in your nurturing or your retargeting isn’t working.
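The mechanics of these segment checks are simple once you have session-level data with a conversion flag. A sketch with hypothetical column names (source, device, visitor_type, converted); the useful habit is printing the volume next to the rate, because a segment converting at 8% on forty sessions tells you nothing.

```python
import pandas as pd

# Hypothetical export: one row per session, with a boolean "converted" column.
sessions = pd.read_csv("sessions.csv")

def segment_rates(df: pd.DataFrame, dimension: str) -> pd.DataFrame:
    """Conversion rate and session volume for each value of a dimension."""
    out = df.groupby(dimension)["converted"].agg(sessions="size", conversions="sum")
    out["rate"] = out["conversions"] / out["sessions"]
    return out.sort_values("rate", ascending=False)

for dim in ["source", "device", "visitor_type"]:
    print(f"\nConversion by {dim}")
    print(segment_rates(sessions, dim))
```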

Landing Pages and Messaging

This is where writing changes matter more than almost anything else, and it’s the thing most audits give the least attention to. Teams will obsess over button placement and completely ignore whether the page is actually saying the right thing to the right person.

Look at message match first. Does the ad copy or the organic ranking content match what the landing page says? Mismatched expectations are one of the fastest ways to lose someone.

Look at the hierarchy of information. What does someone read first, second, third? Does that order make sense given what they need to know to take the next step? Read the page out loud. If it sounds like a press release, it needs rewriting.

For SaaS specifically, look at whether the page speaks to outcomes or features. “Automated reporting” is a feature. “Stop spending three hours building your weekly report” is an outcome. These are not interchangeable.

Sign-Up and Onboarding Flow

This is usually where the most significant drop-off happens for SaaS. Someone was interested enough to click “Start free trial.” Something in the next few steps lost them.

Map the flow step by step. Count the fields. Count the decisions. Every additional field in a sign-up form has a cost. The question isn’t “would it be useful to know this?” The question is “do we need this now, before the person has any reason to trust us?”

Check what happens after sign-up. The confirmation email, the first in-product screen, the onboarding sequence. Measure drop-off at each step. You’re looking for where the cliff is, not just that there’s a cliff somewhere.
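Measuring the drop-off is just counting how many users reach each stage and expressing every step as a share of the one before it. A sketch with placeholder step names and made-up counts:

```python
import pandas as pd

# Distinct users reaching each onboarding step, in order (illustrative numbers).
funnel = pd.DataFrame({
    "step": ["signed_up", "confirmed_email", "created_workspace",
             "invited_teammate", "reached_activation"],
    "users": [1000, 870, 610, 240, 210],
})

# Step-on-step conversion shows where the cliff is;
# overall conversion shows what it costs you.
funnel["step_conversion"] = funnel["users"] / funnel["users"].shift(1)
funnel["overall_conversion"] = funnel["users"] / funnel["users"].iloc[0]

print(funnel)
# In this made-up data the cliff is created_workspace -> invited_teammate:
# only 39% of users survive that step.
```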

Check the copy in empty states, error messages, and loading screens. These are easy to overlook and they’re often where users make their decision about whether the product is worth the hassle.

Pricing Page

For SaaS, the pricing page is usually in the top three most-visited pages. It’s often one of the least optimised.

Look at what happens when someone arrives on pricing. Are they making a direct comparison between plans, or are they trying to decode a table that requires a spreadsheet to interpret? Is it clear what happens at the end of the free trial? Is the trial genuinely frictionless or is a credit card required?

Check whether social proof appears on or near the pricing page. Reviews, logos, security badges. The closer trust signals are to the decision moment, the more work they do.

Technical and Performance Checks

Page speed. Cross-browser rendering. Mobile experience. Form validation behaviour. These aren’t glamorous. They also directly suppress conversion and get ignored because they’re someone else’s job.

Run your key pages through Core Web Vitals. Check the sign-up flow on a real mobile device, not just in browser dev tools. Submit the form with an intentional error and see what the validation message says. Check whether autofill works. These are five-minute checks that routinely surface real problems.
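The lab-side checks can be scripted so they run on a fixed set of key URLs rather than whichever page someone remembered to test. Here’s a rough sketch against Google’s PageSpeed Insights API; the response fields below reflect the v5 API as I understand it, so verify them against the current docs before relying on the output.

```python
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
PAGES = [
    "https://example.com/pricing",   # swap in your own key pages
    "https://example.com/signup",
]

for url in PAGES:
    resp = requests.get(
        PSI_ENDPOINT,
        params={"url": url, "strategy": "mobile", "category": "performance"},
        timeout=60,
    )
    data = resp.json()

    # Lab performance score (0 to 1) from the Lighthouse run.
    score = (
        data.get("lighthouseResult", {})
        .get("categories", {})
        .get("performance", {})
        .get("score")
    )

    # Field LCP from real Chrome users, where Google has enough data.
    lcp = (
        data.get("loadingExperience", {})
        .get("metrics", {})
        .get("LARGEST_CONTENTFUL_PAINT_MS", {})
        .get("percentile")
    )

    print(f"{url}: performance score {score}, field LCP {lcp} ms")
```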

Qualitative Evidence

Data tells you where the problem is. Qualitative research tells you what the problem actually is. An audit without both is half an audit.

Session recordings, exit surveys, user interviews, live chat transcripts. You’re looking for the language people use when they describe their problem, the objections that come up repeatedly, the moments where recordings show users pausing or going back or rage-clicking.

One specific thing to look for in SaaS audits: the gap between the job people think your product does and the job it actually does. That gap almost always shows up in the language, if you look for it.

Common Mistakes Teams Make During a CRO Audit

The most common mistake is running an audit as a list-generation exercise. Teams work through the pages, note what looks wrong and produce a backlog. The audit becomes a to-do list rather than a diagnosis. Those two things produce very different outcomes.

A to-do list tells you what to fix. A diagnosis tells you why it’s broken, which changes what you fix and how. The company I mentioned at the start had a list of “issues” already. What they needed was to understand why their free trial conversion hadn’t moved. That took synthesis across multiple data sources. A list wouldn’t have got there.

The second mistake is treating all findings equally. Not everything an audit surfaces is equally worth fixing. Some things will move the needle significantly. Others are genuine issues, but they’re not your conversion problem; they’re a UX problem, a brand problem, or a product problem. Part of the audit’s job is making that distinction.

The third mistake is running audits too infrequently. An audit is not a one-time project. SaaS products change constantly. Traffic mix shifts. New features get released. The funnel evolves. Auditing once and then testing for eighteen months on findings that are a year old is a good way to waste a year.

The fourth, and this one matters: using the audit as a reason to run tests when the finding doesn’t actually need a test. Some things you discover in an audit are just broken. A form that doesn’t work on Firefox doesn’t need an A/B test. Fix it. Save your experimentation budget for the questions where you genuinely don’t know the answer.

How to Prioritise What You Fix

You’ve run the audit. You have findings. Now you need to decide what to work on first.

The standard frameworks here are PIE (Potential, Importance, Ease) and ICE (Impact, Confidence, Ease). Both are useful as forcing functions: they make you score your ideas rather than argue about them. The limitation of both is that they don’t account for risk. A high-scoring idea on ICE might be a significant change to a high-revenue flow. That warrants more caution than its score suggests.

The Risk Ranker™ model adds a risk dimension. You’re asking not just “how much could this help” but “how much could this hurt if we get it wrong.” A pricing page test with a high potential impact and a low confidence hypothesis carries a different risk profile to a test on a secondary landing page. Both might score similarly on ICE. They shouldn’t be treated the same way.
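To make that concrete, here’s one way to fold a risk dimension into an ICE-style score. This is an illustrative sketch, not the Risk Ranker™ model itself; dividing by risk is a deliberate simplification, and the two example ideas are made up to show how identical ICE scores can still deserve very different treatment.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int      # 1-10: how much this could help if it wins
    confidence: int  # 1-10: how sure we are it will win
    ease: int        # 1-10: how cheap it is to build and run
    risk: int        # 1-10: how much it could hurt if we get it wrong

    @property
    def ice(self) -> float:
        return (self.impact + self.confidence + self.ease) / 3

    @property
    def risk_adjusted(self) -> float:
        # Crude adjustment: discount the ICE score by the risk level.
        # A fuller model would treat high risk as a gate, not just a divisor.
        return self.ice / self.risk

ideas = [
    TestIdea("Pricing page restructure", impact=9, confidence=4, ease=5, risk=8),
    TestIdea("Secondary landing page headline", impact=5, confidence=6, ease=7, risk=2),
]

for idea in sorted(ideas, key=lambda i: i.risk_adjusted, reverse=True):
    print(f"{idea.name}: ICE {idea.ice:.1f}, risk-adjusted {idea.risk_adjusted:.1f}")
```

Both example ideas score 6.0 on ICE. The risk adjustment separates them immediately, which is the whole point.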

In practice, after an audit, I sort findings into three buckets before scoring anything. Fixes, meaning things that are broken and just need fixing. Tests, meaning things where the evidence suggests a problem but the solution isn’t clear. And investigations, meaning areas where the data is ambiguous and needs more qualitative work before we can form a hypothesis.

Only the middle bucket goes into the testing roadmap. The fix bucket goes to the development backlog. The investigation bucket goes back into research. This keeps the testing programme clean and focused on genuine unknowns rather than questions that aren’t actually questions.

Once you have your test candidates, score them. But score with a team, not alone. A single person’s ICE score is just their opinion with numbers attached. When you score collaboratively, the disagreements are where the useful conversations happen.

A Note on SaaS-Specific Complexity

SaaS audits are harder than ecommerce audits for a few reasons worth naming directly.

The conversion event is rarely a single moment. In ecommerce, someone buys or they don’t. In SaaS, there’s a chain:

  • visited the site
  • started a trial
  • completed onboarding
  • reached activation
  • converted to paid
  • expanded

Each link in that chain can be the problem. An audit that only covers the website and ignores activation is an incomplete audit for a SaaS product.

Attribution is messier. SaaS buyers often research across weeks, touch multiple channels, and involve more than one stakeholder. Conversion rate calculations that treat every visitor independently miss the multi-session, multi-device reality of B2B SaaS buying behaviour.

Freemium and free trial models each require different audit lenses. A freemium audit is really an activation and upgrade audit. A free trial audit is primarily a time-to-value audit. Don’t apply the same template to both without thinking about which conversion event actually matters for the business model.

Where to Start

If you haven’t run an audit before, the temptation is to start with analytics because it feels structured and safe. Start with qualitative instead. Spend a morning watching session recordings on your sign-up flow. Read your last fifty support tickets. Pull your NPS verbatim responses and read them in one sitting.

That work will tell you where to look in the data. It will make everything else in the audit faster and more focused. Evidence first, in the order that builds context fastest.

If you have run an audit before and you’re looking to build this into a repeatable practice, the structure matters as much as the content.

  • Who owns the audit?
  • How often does it happen?
  • How does it feed into the testing roadmap?

Those are programme questions, not audit questions, but they determine whether the audit produces anything useful or just sits in a slide deck.

Once you’ve worked through your findings and you’re ready to build your testing roadmap, the question becomes: which of these ideas is actually worth running as an experiment? That’s a different skill to auditing, and it’s where a lot of teams lose momentum. If you want a structured way to evaluate whether your test ideas are properly formed before you commit resource to running them, the Experiment Validator will walk you through the criteria that separate a well-structured hypothesis from an idea that’s going to waste everyone’s time.

Kyle Newsam

An optimizer by trade & lifestyle. Truly any experience or interaction becomes an experiment & something I can learn from. Currently moving around the globe, working from the coolest locations that the younger me could never have imagined.
