In advertising and marketing, campaign performance – and by extension, success – is almost universally measured in the context of comparison to previous campaigns.
The answer to “How did we do?” is invariably phrased in terms of “This campaign delivered X% more than that campaign”, or “We had Y% less traffic this week than the same week last year”. We use monthly reporting, dashboards and trend modelling around past performance in order to judge how well we’re doing in the now.
This can be dangerously short-sighted, and in some cases worryingly self-destructive.
Let’s say that your campaign did 20% better than your previous one. Or maybe not – maybe it only delivered half as much traffic, or only a fraction of conversions. Is that good, or bad? Who decides?
So here’s the thing. The only thing which matters about how well your campaign or marketing activity performed is whether or not it performed as well as you wanted or expected it to. How well your previous campaign performed, or how much traffic your website received on the same day the previous year, is completely inappropriate as a singular yardstick for success.
Dangerous. In a world where monthly reports, campaign wash-ups, dashboards and constant wrangling over have-the-numbers-gone-up-or-down-today – whether it’s an agency delivering campaign summaries to a client, or an internal employee producing status overviews – ‘the report’ is an institution, an expected deliverable, and the numbers it holds are the trigger for reward and promotion, or haranguing and thinly veiled threats.
Our entire world view is shaped by yesterday’s numbers.
But yesterday’s numbers aren’t a reliable measure of relative performance. Yesterday’s numbers aren’t a signpost to whether today’s numbers are good, bad, or indifferent.
Sure, last month you did some similar marketing, so it’s not unreasonable to expect that this month’s similar marketing might produce similar results. You learned some stuff, too, and applying those findings should make things work harder this time around. But your marketing didn’t control the weather. Or the stock market. Or the phase of the moon. Your relationship to your competitors, the rest of your vertical, and the wider market is (likely radically) different than it was. Your channel activity – and that of your competitors – has likely changed, it’s closer to pay-day, and, oh – lots of people were probably on holiday around some awkwardly timed bank holidays last month.
Comparing your performance this month to last month in terms of the numbers you saw then assumes that you’re operating in a marketing vacuum. Nothing could be further from the truth.
I’m not saying that historical performance is completely irrelevant, or that it doesn’t have a role in setting targets (and, in some cases, it might be perfectly valid to set a target which is based on an uplift from historical performance). What I’m suggesting is that, when planning a campaign or running ongoing marketing, you really need to have some clear definitions of what success looks like. For KPI setting, we should be working back from the highest level of the business to identify how much value needs to be generated to cover costs, to maintain profit margins, and to deliver required uplifts – and then get a little ambitious, whilst considering and accounting for as many influencing factors as possible. With those thresholds defined, previous performance should absolutely be considered when setting appropriate targets against KPIs, but only as one of many factors – for example, as an estimate of market size, and therefore of where you might expect to see diminishing returns on bought media. It’s criminal to simply set targets as a percentage uplift on previous performance.
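To make the “work back from the business” idea concrete, here’s a minimal sketch of that arithmetic. All the figures, the function name, and the simple margin model are illustrative assumptions for one hypothetical campaign – not benchmarks or a recommended model:

```python
# A rough sketch of working back from business requirements to a
# campaign target, rather than applying an arbitrary uplift to last
# month's numbers. Every figure here is an illustrative assumption.

def required_sessions(campaign_cost, target_profit_margin,
                      avg_order_value, conversion_rate):
    """Return the revenue and sessions needed for the campaign to
    cover its cost and hit the desired profit margin."""
    # Revenue needed so that (revenue - cost) / revenue >= margin
    required_revenue = campaign_cost / (1 - target_profit_margin)
    orders = required_revenue / avg_order_value
    sessions = orders / conversion_rate
    return required_revenue, sessions

revenue, sessions = required_sessions(
    campaign_cost=10_000,      # assumed media + production spend
    target_profit_margin=0.2,  # assumed 20% margin on campaign revenue
    avg_order_value=50,        # assumed average order value
    conversion_rate=0.02,      # assumed session-to-order rate
)
print(f"Need £{revenue:,.0f} revenue from {sessions:,.0f} sessions")
```

Only once you have a number like that – derived from what the business actually needs – does it make sense to sanity-check it against historical performance and market size.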
Typically, this kind of thinking doesn’t happen because organisations haven’t learned how to set effective goals and KPIs. When you don’t have a clear definition of what success looks like, with proper KPIs, targets and success thresholds, it’s easy to fall back onto “well, how did we do last time?” and to apply an arbitrary uplift. Marketing managers, stakeholders and budget-holders often fall back on this kind of thinking because they’re not educated or empowered enough to do serious, proper modelling and forecasting; and because we’ve huge amounts of data, reports and spreadsheets, it’s very easy to just point at a series of numbers and demand more.
I’m going to go so far as to state that any regular reporting (note my usual distinction between reporting and analysis) which compares this month or this year against similar previous periods is, with the exception of when it’s used as part of a wider, considered decision-making process, a waste of time.
In agency world, the whole concept of monthly reports makes me deeply frustrated. They exist to provide accountability and to chart the impact of work done, but in reality, they do little towards this, and often cause more harm than good in the long run. Spreadsheets which state that this KPI has gone up a little bit compared to last month, or that KPI has gone down a little bit compared to last year, do nothing but breed mistrust. The people producing the reports take no value from them, because structured reporting of vanilla data doesn’t generate or allow for insight and analysis. The recipients get nothing but unanswered questions, because the report doesn’t say why things have changed – and it’s often simply because stuff is different from before. Except that’s not the kind of answer which engenders trust, so we collectively spend more time constructing theories – about what happened a year ago, around a campaign influenced by a vast number of immeasurable external factors – which sufficiently satisfy all the parties involved, than we do actually influencing the numbers. I’m sure that people in-house, working within even the most data-driven organisations, have the same challenge.
Monthly reports should state what we hoped and expected to achieve now, whether we did so or not, and what needs doing next. Comparing this month to last month’s data as a regular or decision-making exercise is a poor use of time. Of course we want to achieve more than we did last year, but using that as an arbitrary measure of success is a risky alternative to setting deliberate, considered targets for the now.
So let’s let go of yesterday. Let’s look at where we are now, and whether or not we’re where we want to get to. I’d imagine that most of the time, the answer will be “no, we’re not”.
So go do something about it; spend the time you would otherwise have spent digging through your analytics package to see how many visitors you got this time last month.