Letting go of yesterday

In advertising and marketing, campaign performance – and by extension, success – is almost universally measured by comparison to previous campaigns.

The answer to “How did we do?” is invariably phrased in terms of “This campaign delivered X% more than that campaign”, or “We had Y% less traffic this week than this week last year”. We use monthly reporting, dashboards and trend modelling around past performance in order to judge how well we’re doing in the now.

This can be dangerously short-sighted, and in some cases worryingly self-destructive.

Let’s say that your campaign did 20% better than your previous one. Or maybe not – maybe it only delivered half as much traffic, or only a fraction of the conversions. Is that good, or bad? Who decides?

So here’s the thing. The only thing which matters about how well your campaign or marketing activity performed is whether or not it performed as well as you wanted or expected it to. How well your previous campaign performed, or how much traffic your website received on the same day the previous year, is completely inappropriate as a singular yardstick for success.

Dangerous. In a world of monthly reports, campaign wash-ups, dashboards and constant wrangling over have-the-numbers-gone-up-or-down-today – whether it’s an agency delivering campaign summaries to a client, or an internal employee producing status overviews – ‘the report’ is an institution, an expected deliverable, and the numbers it holds are the trigger for reward and promotion, or haranguing and thinly veiled threats.

Our entire world view is shaped by yesterday’s numbers.

But yesterday’s numbers aren’t a reliable measure of relative performance. Yesterday’s numbers aren’t a signpost to whether today’s numbers are good, bad, or indifferent.

Sure, last month you did some similar marketing, so it’s not unreasonable to expect that this month’s similar marketing might produce similar results. You learned some stuff, too, and applying those findings should make things work harder this time around. But your marketing didn’t control the weather. Or the stock market. Or the phase of the moon. Your relationship to your competitors, the rest of your vertical, and the wider market is (likely radically) different from what it was. Your channel activity – and that of your competitors – has likely changed; it’s closer to pay-day; and, oh – lots of people were probably on holiday around some awkwardly timed bank holidays last month.

Comparing your performance this month to last month in terms of the numbers you saw then assumes that you’re operating in a marketing vacuum. Nothing could be further from the truth.

I’m not saying that historical performance is completely irrelevant, or that it doesn’t have a role in setting targets (and, in some cases, it might be perfectly valid to set a target which is based on an uplift from historical performance). What I’m suggesting is that, when planning a campaign or running ongoing marketing, you really need some clear definitions of what success looks like.

For KPI setting, we should be working back from the highest level of the business to identify how much value needs to be generated to cover costs, to maintain profit margins, and to deliver required uplifts – and then get a little ambitious, whilst considering and accounting for as many influencing factors as possible. With those thresholds defined, previous performance should absolutely be considered when setting appropriate targets against KPIs, but as one of many factors – for example, as an estimation of market size, and therefore of where you might expect to see diminishing returns on bought media. It’s criminal to simply set targets as a percentage uplift on previous performance.
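To make the “working back from the business” idea concrete, here’s a minimal numerical sketch. Every figure below is hypothetical, and the simple revenue-to-visits funnel is an assumption for illustration – a real model would layer in many more influencing factors (seasonality, market size, diminishing returns on bought media):

```python
# Hypothetical inputs - working back from business requirements,
# not forward from last month's numbers.
fixed_costs = 40_000.0      # monthly overheads to cover (assumed)
campaign_spend = 10_000.0   # planned media + production spend (assumed)
target_margin = 0.25        # profit margin the business must maintain
avg_order_value = 80.0      # average revenue per conversion (assumed)
conversion_rate = 0.02      # assumed site conversion rate

# Revenue needed so that, after all costs, the target margin holds:
# revenue - costs = target_margin * revenue
total_costs = fixed_costs + campaign_spend
required_revenue = total_costs / (1 - target_margin)

# Translate the revenue threshold into channel-level KPI targets.
required_conversions = required_revenue / avg_order_value
required_visits = required_conversions / conversion_rate

print(f"Revenue needed: £{required_revenue:,.0f}")
print(f"Conversions needed: {required_conversions:,.0f}")
print(f"Visits needed: {required_visits:,.0f}")
```

The point of the sketch is the direction of the calculation: the target falls out of what the business needs, and last month’s numbers only enter later, as a sanity check on whether those visit volumes are plausible.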

Typically, this kind of thinking doesn’t happen because organisations don’t know how to set effective goals and KPIs. When you don’t have a clear definition of what success looks like, with proper KPIs, targets and success thresholds, it’s easy to fall back onto “well, how did we do last time?” and to apply an arbitrary uplift. Marketing managers, stakeholders and budget-holders often fall back to this kind of thinking because they’re not educated or empowered enough to do serious, proper modelling and forecasting; and because we have huge amounts of data, reports and spreadsheets, it’s very easy to just point at a series of numbers and demand more.

I’m going to go so far as to state that any regular reporting (note my usual distinction between reporting and analysis) which compares this month or this year against similar previous periods is, with the exception of when it’s used as part of a wider, considered decision-making process, a waste of time.

In agency world, the whole concept of monthly reports makes me deeply frustrated. They exist to provide accountability and to chart the impact of work done, but in reality, they do little towards this, and often cause more harm than good in the long run. Spreadsheets which state that this KPI has gone up a little bit compared to last month, or that KPI has gone down a little bit compared to last year, do nothing but breed mistrust. The people producing the reports take no value from them, because structured reporting of vanilla data doesn’t generate or allow for insight and analysis. The recipients get nothing but unanswered questions, because the report doesn’t say why things have changed – and it’s often simply because stuff is different from before. Except that’s not the kind of answer which engenders trust, so we collectively spend more time inventing theories about what happened a year ago – around a campaign which was influenced by a vast number of immeasurable external factors – that sufficiently satisfy all the parties involved than we do actually influencing the numbers. I’m sure that people in-house, working within even the most data-driven organisations, have the same challenge.

Monthly reports should state what we hoped and expected to achieve now, whether we did so or not, and what needs doing next. Comparing this month’s data to last month’s as a regular or decision-making exercise is a poor use of time. Of course we want to achieve more than we did last year, but using that as an arbitrary measure of success is a risky alternative to setting deliberate, considered targets for the now.
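A report built that way reduces to a very simple shape: each KPI, its deliberately set target, and whether we hit it. The sketch below is illustrative only – the KPI names and figures are invented, and a real report would add the “what needs doing next” commentary alongside each line:

```python
# A sketch of reporting against deliberate, pre-agreed targets,
# rather than against last month. All figures are illustrative.
targets = {"revenue": 66_700, "conversions": 830, "visits": 41_700}
actuals = {"revenue": 58_200, "conversions": 790, "visits": 44_100}

for kpi, target in targets.items():
    actual = actuals[kpi]
    attainment = actual / target       # 1.0 means exactly on target
    status = "on target" if attainment >= 1 else "below target"
    print(f"{kpi}: {actual:,} vs target {target:,} "
          f"({attainment:.0%}, {status})")
```

Note that “visits” can be above target while “revenue” is below it – a comparison against last month would hide exactly that kind of tension, whereas a comparison against deliberate targets surfaces it immediately.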

So let’s let go of yesterday. Let’s look at where we are now, and whether or not we’re where we want to get to. I’d imagine that most of the time, the answer will be “no, we’re not”.

So go do something about it; use the time you would have spent looking through your analytics package at how many visitors you got this time last month.