Proving an ad's success doesn't mean you've been successful

A study shows marketers favour awareness and image metrics over ROI and market share gains, yet these intermediate measures have no reliable link to business performance. By Alan Mitchell

We all know the story about the man who lost his keys on a dark and stormy night. He went looking for them under the light of a lamp post, not because that’s where he lost them but because that’s where he could see.

Nobody in their right mind would do such a stupid thing, but we are human after all. When we’re groping in the dark, we tend to latch on to anything. And in real life, the mistake rarely presents itself in such a stark manner.

How about this for example: “We can’t establish whether or how this advertising campaign has improved our company’s profitability, so we’ll focus on this measure (brand awareness, campaign recall, etc) instead”. We tell ourselves that the surrogate metric is, in some way, an indicator of what we are really interested in, but is it? Could it in fact be leading us further away from our destination – like the man and his lamp post?

Now, measuring results is hard enough, but measuring the relative effectiveness and usefulness of different metrics is even harder. You need an awful lot of detailed data, plus even more painstaking analysis, to get anywhere close to a decent answer. But here, the Institute of Practitioners in Advertising (IPA) happens to be sitting on a gold mine: more than 25 years’ worth of IPA Advertising Effectiveness Awards. Not just the winners, but all entries along with all their accompanying data, much of which was never even made available to the awards judges (for confidentiality reasons).

Recently the IPA let two researchers – Les Binet of DDB Matrix and marketing consultant Peter Field – loose on this data. The results, reported in Marketing in the New Era of Accountability, are quite intriguing.1

The first thing they did was analyse what data the case studies highlighted. For example: nearly all of them mention target markets and business objectives, but only half actually clarify communication goals and fewer than a quarter dwell on metrics such as market share by volume or value. Of the 880 papers covering national campaigns, just 39 demonstrated a return on investment for the campaign, while 204 demonstrated market share gains and 382 reported “intermediate” effects such as improved brand awareness and image.

And here’s the rub: when Binet and Field crunched the numbers they discovered something disturbing: “intermediate effects do not correlate reliably with business performance… there does not yet appear to be widespread reliable measurement of intermediate effects in a way that can predict subsequent behaviour and business effects”. This is important on two counts: first, the objectives marketers set for their initiatives; and second, the usefulness of the measures they use to judge success.

The hard and soft rules
On the objectives front, Binet and Field distinguish between “hard” direct business objectives (such as profit or market share); “hard” consumer behaviour objectives (such as market penetration or loyalty); and softer intermediate objectives such as awareness, fame or quality perceptions. Their conclusions are sobering. For example, the databank evidence clearly suggests that campaigns that make increased profitability their number one objective tend to be far more successful. But only 7% of all the cases studied did this. On the other hand, one of the most popular objectives for campaigns was improved loyalty, yet the IPA evidence shows that these campaigns “underperform on almost every business metric”. Campaigns to increase market penetration were much less popular than loyalty-focused ones… and also far more effective.

There’s plenty more along these lines. Over half of all campaigns in the databank identify improved brand image as one of their objectives. Yet, despite their popularity, brand image campaigns rarely deliver business results. Looking at the other side of the coin, over half of all campaigns that were highly effective in business terms reported little or no improvement in brand image.

The same goes for awareness, the most common of all campaign objectives. Awareness is probably a flawed metric anyway (existing users of the brand tend to be far more aware of its advertising than non-users, so awareness data may not measure advertising effects at all). Either way, most awareness campaigns did not deliver business results. According to Binet and Field: “It is not the case that targeting awareness necessarily leads to more commercially effective marketing”.

But here’s the twist. At the same time, “awareness is a more achievable objective”. In other words, if you make “increased profits” an objective of your campaign, your chances of proving success are reduced, but if you make “increased awareness” an objective it’s easier to move the needle and thereby “demonstrate” success. “Intermediate metrics are seductive and widely used because they tend to move more quickly and impressively and are easier to link to marketing activity,” say Binet and Field. In other words, via these metrics, marketers are searching under lamp posts.

Binet and Field didn’t set out thinking they would discover anything like this. “As we went along it started to shriek out at us,” says Field. But the conclusion is hard to avoid.

Campaigns that focus on hard business measures need robust econometric analysis as a platform. Conducting such an analysis “enlightens teams about the mechanics of their market, leading to better informed decisions”, argues Field. “But there are some profound misunderstandings about econometrics. It’s seen as kind of scary and opaque”.

Instead, he argues, too many marketers end up focusing on metrics where it’s easy to prove you’ve moved the needle – even if moving this particular needle doesn’t actually deliver any business benefit. After all, he notes, “you’ve got to have something to put on a chart to present to your boss”. In this way, the great accountability bandwagon (“proving” success) gets in the way of effectiveness (actually being successful).

Multiple indicators
Binet and Field’s report offers some ways forward. Hard business objectives such as “reducing consumer price sensitivity” are better than soft intermediate ones, they say. Multiple objectives (with a clear order of priorities) are better than single objectives. And multiple indicators are better than single indicators. No single metric in isolation acts as a good predictor of performance, but if groups of four or five metrics all move in the right direction together, it usually means the campaign has achieved something real.

But perhaps most important of all is a simple warning: by understanding the distinction between accountability and effectiveness, and the ways in which the quest to demonstrate accountability can actually undermine effectiveness, marketers may have a better chance of looking for their keys where they’ve got a good chance of finding them.

1 Marketing in the New Era of Accountability, Les Binet and Peter Field, published by WARC
