What gets measured……my arse

We’re coming up to the ad industry’s annual celebration of effectiveness – the Festival of Creativity and Effweek – all run, created or generally encouraged by the Institute of Practitioners in Advertising (IPA).

According to the effectiveness gurus, Peter Field and Les Binet, communications are becoming less effective because we are focusing on short-term returns over long-term brand building.

There’s a deep irony in this. Field and Binet have tirelessly championed the measurement of effectiveness. But it’s exactly this emphasis on measurement that has led marketers to focus more on evaluating campaigns – and that mostly means short (or at best short-ish) term metrics equating to return on investment.

The problem is that long-term measurement is really difficult. Some years ago the IPA introduced a category in its effectiveness awards for ‘the longer and broader effects of advertising’. It became a kind of holy grail in the effectiveness community and was won by some of the great advertisers of history, like VW. But it’s much easier to show that sales responded profitably to a campaign over the year when it appeared than to explain a brand’s sustained success over decades. And typically people don’t stay in a job long enough to be that concerned about the long term. So why bother?

It’s also more broadly true – beyond advertising.

Think about public policy issues like health. We’re often reminded ‘what gets measured gets managed’, so metrics drive incentives, and that, broadly speaking, is what gets things done. We can measure how many patients have been treated, how beds have been used and how long waiting lists are. But the impact of better prevention or healthier lifestyles is hellishly difficult to evaluate. Maybe that’s why the authorities’ efforts in these areas have been so limp.

Yet we all know deep down it would be infinitely more efficient to improve our lifestyles than to treat illnesses.

Quite a conundrum, if you’re a policy maker.

It also explains why health systems are really sickness systems. All about illness, not really about staying healthy at all. And why they’re often accused of being driven by the interests of Big Pharma – the companies that stand to benefit from treating more sick people rather than preventing them becoming sick in the first place.

Insert your own conspiracy theory here.

Advertising effectiveness 1: the dilemma

We all know it’s difficult to measure the effectiveness of advertising (and all the content that ordinary people also call advertising). There’s a fundamental conundrum at the heart of it.

To evaluate marketing, you have to have an influence model. Mostly this takes the form of something like a sales funnel. A bit like this: you create awareness, leading to an attitude shift, greater brand consideration and ultimately purchases. Sounds perfectly sensible. This has been the prevailing wisdom since the twenties, when Starch and Gallup started researching these things.

The problem is that’s not what happens.

Neuroscience tells us that brand decisions are made instinctively in the reptilian brain or by using heuristics. We simply don’t go through the deliberative process the sales funnel describes, learning the messages being communicated. When did you last spend some quality time considering the competing claims of two brands in a category you don’t care about?

Behavioural economics demonstrates that we make our choices on all kinds of irrational grounds. Future outcomes are discounted. Loss aversion takes priority over likely gain. Choices depend on how they’re framed. It’s all very different to the straight-line thinking that traditional economics and the sales funnel assume.

So traditional methods are dodgy at best. But they have what Charles Channon, many years ago, termed ‘organisational validity’. Which means they hold sway with your boss even though they’re pants.

Neuroscience has thrown up some interesting research methods that short-circuit this imaginary reasoning process, so there’s some hope for improvement, but it tends to be used as an interesting add-on to existing approaches. In the future, someone may make this work.

Until then, there’s a choice between no explanation at all or one we know is wrong. That’s the dilemma.