I’ve watched technically excellent experimentation teams plateau for years. I’ve been on teams like that myself. The reason for the plateau is rarely poor test design or a lack of statistical rigour – they’re excellent teams, after all. It’s usually that nobody above them cared enough to remove the blockers.
The thing is, if you’ve got a branding team that takes three weeks to sign off on a copy variant, or a dev team with a six-month backlog and zero obligation to prioritise your requests… you’re pretty much destined to fail unless you can do something about it. A telltale sign that you’re in trouble: your CMO sees a results deck once a quarter and still can’t articulate why the programme exists.
This is the thing that doesn’t get spoken about much. There’s plenty of content on how to run better experiments. But almost none on how to build the organisational conditions that let experimentation actually compound. So that’s what this is.
Why Programmes Stall (And It’s Not What You Think)
The standard explanation for a struggling experimentation programme is process failure. The backlog isn’t managed well. The hypothesis format is inconsistent. There’s no prioritisation framework. Fix those things and the programme will fly. That’s the advice. It’s mostly wrong, or at least incomplete.
Process problems are real (and they’re something we specifically help fix), but they’re usually downstream of a more fundamental issue:
the programme doesn’t have a powerful enough internal advocate.
When you have a senior stakeholder who genuinely sees the value, the process problems get solved quickly because there’s someone with authority making them a priority. Without that person, you can have a perfect process and still spend six months watching experiments die in a dev queue. Painful!
I’ve seen this pattern enough times to trust it. The teams running the most sophisticated programmes, the ones actually compounding learning over time, almost always have a specific person above them who runs interference. Someone who oversees the dev, product, or brand team. Someone who can walk into a resourcing conversation and make the experimentation programme’s needs land.
Most junior practitioners don’t think about this early enough. Basically because they don’t know what they don’t know. They focus on getting the programme right technically first, then try to get buy-in later. By then, they’ve already accumulated months of blocked experiments, missed opportunities, and a leadership team that’s seen inconsistent output and drawn the wrong conclusions about what the programme is capable of.
The Advice That Sounds Right But Doesn’t Work
Ask around and you’ll hear a few standard recommendations for building stakeholder engagement.
- Run company-wide ideas intake sessions.
- Create a results newsletter.
- Gamify the submission process so other teams feel involved.
These suggestions are well-intentioned. But in practice, most of them fail in the same way.
The intake form fills up briefly, then goes quiet. A handful of people submit ideas in the first few weeks, but… if the programme isn’t moving fast enough to close the loop (actually run the experiment, share the result, and give the person who submitted the idea some feedback), those people stop submitting. The form is abandoned. You end up with a system that creates the appearance of cross-functional engagement without the substance.
The results newsletter fails for a similar reason. It works if the right people are reading it and care about what’s in it. But… more often, it becomes something that gets skimmed by the people who are already bought in and ignored by the people who aren’t. A dedicated CRO newsletter sent to a broad internal audience is a lot of effort to maintain for uncertain return. And if the results are framed statistically rather than commercially, the people who most need to understand the programme’s value won’t get it from reading it.
None of this means you should never communicate results broadly. It means you should be clear-eyed about what these tactics actually achieve versus what you’re hoping they’ll achieve. Making leadership care about your programme is not a comms problem that gets solved by sending more updates.
What Actually Unblocks a Programme
The work I did inside Samsung on governance was a great example of how to get traction in experimentation programmes. The goal wasn’t to make everyone enthusiastic about experimentation – that’s a very tough task… It was to identify and remove the specific blockers that were preventing experiments from launching and completing. The question wasn’t “how do we get the whole organisation excited?” It was “who has the authority to solve this specific problem, and what do they need to see to act on it?”
That reframe matters. When you treat stakeholder buy-in as a broad cultural goal, you end up doing a lot of activity that doesn’t translate to actual unblocking. When you treat it as a targeted political problem, you get specific about who you need, what they control, and what they care about. Then you go and make a case to that person, in their language, about why the programme deserves their attention.
Senior stakeholders, the people who can actually unblock development resource or override a branding team’s objections, don’t care about p-values. They don’t care about conversion rate lifts in isolation. They care about revenue, risk, and whether decisions are being made on evidence or guesswork. The case you make to them has to be framed in those terms.
This is honestly where most well-intentioned practitioners get it wrong. They present experiment results as statistical outcomes. Variant B outperformed control at 95% confidence with a 4.2% lift in conversion rate. That sentence lands fine in a CRO team meeting. But in a leadership meeting, it gets nodded at and forgotten, because it doesn’t speak their language. The version that lands better is:
because we ran this test before shipping, we caught a variant that would have cost us an estimated £180k in annual revenue if it had gone out untested.
That’s a risk management story. Leadership understands risk management.
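To make that concrete, here’s a minimal sketch of the arithmetic behind a number like that. The inputs are invented purely for illustration (chosen to back out the £180k above); the point is the structure: baseline revenue through the tested journey multiplied by the observed relative drop.

```python
# Hypothetical inputs, chosen only to illustrate the calculation.
baseline_annual_revenue = 4_500_000  # £ flowing through the tested journey each year
observed_relative_effect = -0.04     # variant converted 4% worse than control

# Annualised revenue that was at risk had the variant shipped untested
revenue_at_risk = baseline_annual_revenue * abs(observed_relative_effect)

print(f"Estimated annual revenue protected: £{revenue_at_risk:,.0f}")
# Estimated annual revenue protected: £180,000
```

Even with wide error bars on both inputs, a figure in that range reframes a “failed test” as an intercepted loss.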
Making Losses Land as Wins
This reframe, from wins-focused to risk-management-focused, is one of the more powerful things you can do when building a case upwards. Most practitioners lead with the wins. Here are the tests that lifted revenue. The problem is that wins require the listener to do some inferential work. They have to understand that the lift happened because of the programme, that it wouldn’t have happened otherwise, and that the programme therefore deserves continued investment. That chain of logic often doesn’t survive contact with a busy senior stakeholder.
Losses make the case more concretely. There’s a situation I keep coming back to involving The Wine Society. The specific detail isn’t the point; the structure is. When you can quantify what a losing variant would have cost if it had been shipped without testing, you have a number that’s impossible to ignore. It’s not a projected gain. It’s a specific, avoidable loss that the programme intercepted. That kind of story doesn’t require the listener to believe in experimentation philosophically. It just requires them to believe in not losing money.
The reason we experiment is because we don’t know the answer.
A test that loses is the answer to a question you were previously just guessing at. Getting leadership to understand that is half the battle, and the way to get them there is through specific numbers, not through arguments about best practice.
The Role of the Impact Scorecard and Strategic Metric Map
Two frameworks are worth naming here because they directly address the problem of making a programme’s value visible to people who aren’t close to it.
The Impact Scorecard is a structured way of summarising what your programme has actually produced:
- revenue protected
- decisions informed
- risks caught
- velocity of learning over time
It translates experimentation output into commercial language. The reason this matters is that the people who control experimentation budgets are typically not the people who understand what a confidence interval means. They need a different kind of readout. The scorecard gives them one.
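There’s no prescribed schema for this, but as a rough sketch of the shape such a readout might take (field names and figures here are hypothetical, not an official template):

```python
from dataclasses import dataclass

# A hypothetical shape for one reporting period's Impact Scorecard.
@dataclass
class ImpactScorecard:
    period: str                   # e.g. "Q3 2024"
    revenue_protected_gbp: float  # losses intercepted before shipping
    decisions_informed: int       # ship/kill/iterate calls backed by evidence
    risks_caught: int             # losing variants stopped by testing
    experiments_completed: int    # learning velocity for the period

scorecard = ImpactScorecard(
    period="Q3 2024",
    revenue_protected_gbp=180_000,
    decisions_informed=14,
    risks_caught=3,
    experiments_completed=11,
)
```

Notice that nothing in it requires statistical literacy to read; every field is a commercial or operational fact.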
The Strategic Metric Map is about alignment upstream. Most experimentation programmes operate with a set of tactical metrics:
- conversion rate
- add-to-cart rate
- checkout completion
Those metrics are real and they matter, but they can be disconnected from what the business is actually trying to achieve at a strategic level. A subscription business optimising for sign-up rate without accounting for average membership duration or churn is optimising for the wrong thing entirely. The metric map forces the question:
what does success look like to the people running the business, and how do our experiments connect to that?
When you can draw that line clearly, the programme becomes legible in a way that tactical reporting never achieves.
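To see why drawing that line matters, take the subscription example above. A hedged sketch with made-up numbers: a variant that “wins” on sign-up rate can still lose on the metric the business actually runs on.

```python
# Hypothetical numbers: the variant lifts sign-up rate but attracts
# shorter-lived members. Expected value per visitor tells the real story.

def value_per_visitor(signup_rate, monthly_revenue, avg_months_retained):
    """Expected revenue contributed by one visitor to the sign-up flow."""
    return signup_rate * monthly_revenue * avg_months_retained

control = value_per_visitor(signup_rate=0.020, monthly_revenue=25.0, avg_months_retained=18)
variant = value_per_visitor(signup_rate=0.023, monthly_revenue=25.0, avg_months_retained=12)

print(f"Control: £{control:.2f} per visitor")  # -> roughly £9.00
print(f"Variant: £{variant:.2f} per visitor")  # -> roughly £6.90, despite the higher sign-up rate
```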
Together, these two tools do something specific. They give the experimentation programme a way to speak to leadership in leadership’s language, without watering down the substance of what the programme is doing.
Tactical Moves That Actually Work
If not a dedicated CRO newsletter, then what? The answer most senior practitioners land on is embedding results into the comms channels that already have traction.
If there’s a monthly or quarterly business review, your results go in there. If there’s a leadership update email, your commercial impact number goes in there.
You’re not creating a new habit for people to form. You’re inserting into the habits they already have.
The content changes too. You’re not reporting “experiment results.” You’re reporting on decisions made with evidence this month, risks caught, and the commercial value of the programme’s output. One paragraph. Two numbers. One specific story if you have it. That’s the format. It’s brief enough to be read and specific enough to be remembered.
On the question of who to target:
the most useful stakeholder is usually not the one who’s most enthusiastic about experimentation. It’s the one who controls the teams that block you most often.
Figure out where your bottlenecks actually are, then work out who has authority over that function, then make a targeted case to that person. That’s a different approach to the “let’s get everyone excited” strategy, and it’s considerably more effective.
A junior practitioner tends to optimise for broad awareness. The logic is that if enough people know about the programme, the support will follow. An experienced practitioner identifies the two or three specific people who could materially change what the programme is able to do, and focuses energy there. Fewer relationships, managed more deliberately, producing more leverage.
Where to Start
If your programme is currently stalling, the first diagnostic question is simple. Who could make the programme’s biggest blocker disappear if they decided to prioritise it? Name that person. Then ask what they care about and what they need to see to act. If you can’t answer that second question clearly, that’s your first piece of work.
The second question is about how you’re currently reporting results. If you’re leading with statistical outcomes rather than commercial ones, change that before your next leadership touchpoint. You don’t need to overhaul everything. You need one clear commercial number and one specific story about a decision the programme made possible.
The third question is about your metric alignment. If your programme’s success metrics aren’t visibly connected to the metrics the business reports against, you have a legibility gap. Close it explicitly. Don’t assume the connection is obvious to people outside the team.
If you want a structured way to do this, start with the Impact Scorecard, which is built exactly for this problem. It gives you a framework to translate your programme’s output into commercial language, the kind that makes sense to the people who control your budget and decide whether the programme keeps running. It’s the most direct tool I know for making an experimentation programme’s value legible to the people who most need to understand it.