Measuring ‘effectiveness’ depends on what it is you are looking for

If the Big Bang can be mistaken for pigeon droppings, imagine the insights to be found by poking around the results of marketing effectiveness surveys

Alan Mitchell

In 1963 two astronomers, Robert Wilson and Arno Penzias, got the chance to use a powerful radio telescope to scan the universe for radio signals. Wilson and Penzias wanted to do the best possible job and, being meticulously thorough geeks, they were annoyed by a persistent level of noise produced by the telescope.

The noise was the same wherever they pointed the telescope, so they concluded there must be some malfunction. They spent months checking the tiniest elements of its workings: every electronic component, every bit of wiring and so on. But the noise persisted.

Then they turned their attention to the pigeons that used the telescope as a nest, theorising that the noise was created by “white dielectric material” on its surface. Pigeon droppings to you and me. Many months of campaigning later, their telescope was free of pigeons.

It took them over a year to eliminate every possible source of noise they could imagine, and they did indeed achieve a small reduction. But the background hiss persisted. So they made do. They knew their noise level was constant, so in future they would simply deduct it from all their measurements. Problem solved.

About a year later, one of them attended a conference where, through a friend of a friend, they heard about calculations by theoretical physicists investigating Big Bang theories of the origins of the universe. If the Big Bang had happened, the theorists calculated, then with very good instruments we should be able to pick up a faint “echo” wherever we look. They calculated what this echo would sound like. It was exactly the same as the noise Wilson and Penzias had been working so hard to eliminate from their telescope.

What you see depends on what you are looking for. Wilson and Penzias weren’t looking at noise created by pigeon droppings. They were staring at the echo of the beginning of the universe. Their 600-word paper on the subject earned them the Nobel Prize.

The habits of shoppers

More recently, a team of academic geeks embarked on a much more mundane project. When it came to consumer supermarket shopping habits, they kept on reading that very high proportions of shopping decisions – perhaps as high as 70% – are made on the spur of the moment, in the store. But David Bell, Daniel Corsten and George Knox couldn’t find any hard, academically sound research to back these claims up. Most of the references pointed to work done by POPAI, the point-of-purchase advertising trade body, and, well, no matter how good its research, it did have a vested interest in promoting a particular outcome.

Doing their research wasn’t easy. They couldn’t just analyse scanning data, because that only reveals what people end up doing, not what their original intentions were. They needed to talk to people before they went shopping, and then compare their original shopping lists to what they ended up buying. For good measure, they also asked a lot of other questions: about income levels, family size, whether they were avid newspaper readers, their attitude towards shopping, how they travelled to and from the store and so on.

What they discovered was intriguing. Some shoppers were indeed highly influenced by retailers’ in-store marketing activities – price promotions, gondola end displays, in-store posters and hoardings and so on. But others were more or less immune, and they fell into a number of different categories.

What is being measured?

Those tending to ignore in-store marketing blandishments and keep more or less to their original shopping lists included people who wanted a fast and efficient shopping trip: newspaper readers, shoppers with older kids and empty nesters. The shoppers most influenced by in-store blandishments were the better off, people who travel to the store by car and those who like to look out for offers on the shelves.

Stepping back a bit, their research raised a more fundamental question about what we look for, and think we are measuring, when we measure the “effectiveness” of a particular marketing activity.

From the retailer’s point of view, it introduces a particular initiative (a marketing “stimulus”) and then measures the resulting sales uplift (the “response”). This looks very scientific: some stimuli generate bigger responses than others, and the retailer can analyse this (relatively) easily by crunching its scanning data. But the result is an average hiding some big differences. If you dig deeper into what’s going on, as Bell, Corsten and Knox did, you discover that the same “stimulus” doesn’t always generate the same “response”. In fact, depending on who the person is and what their current priorities are, the response can be very high or it can be close to zero. People “choose” to be influenced by the activity. Or not.
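To see how one blended number can hide those differences, here is a toy sketch. Every segment name and figure below is invented purely for illustration; the point is only that a single “effectiveness” average can sit on top of responses ranging from substantial to nearly zero.

```python
# Hypothetical illustration: one averaged "effectiveness" figure versus the
# segment-level responses it conceals. All numbers are invented.

segments = {
    # segment label: (share of shoppers, average uplift per shopper, in units)
    "deal-prone, travels by car":    (0.30, 4.0),
    "list-keeping newspaper reader": (0.45, 0.2),
    "empty nester, quick trip":      (0.25, 0.1),
}

# What the scanning data shows: one blended average uplift.
blended = sum(share * uplift for share, uplift in segments.values())
print(f"Measured 'effectiveness' of the stimulus: {blended:.2f} units per shopper")

# What is actually going on: responses range from substantial to near zero.
for name, (share, uplift) in segments.items():
    print(f"  {name}: {uplift:.1f} units ({share:.0%} of shoppers)")
```

Change the mix of shoppers in the store and the blended figure changes too, even though nothing about the stimulus itself has altered – which is exactly the problem with treating that average as a property of the stimulus.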

So what do marketing metrics really measure? Do they measure the “effectiveness” of a particular “stimulus”? Or do the numbers simply give us a glimpse of the effects – the “echo” if you like – of different people’s propensities to respond to different stimuli?

If the degree of take-up of your marketing stimulus depends on the propensities of the people exposed to it, then the notion of measuring the “effectiveness” of that stimulus in isolation from those propensities is, well, meaningless. In fact, it makes the blanket notion of “advertising effectiveness” meaningless. The data might tell you something about correlations. But it probably won’t tell you much about cause and effect. So you won’t learn much from it.

You can, for example, collect as much data as you like about how many people saw your ad, how often, and correlate this to subsequent sales. You can even try to parse this data by type of ad: was it based on emotional or rational appeals? Was it funny? And so on. But still, all you are doing is collecting correlations – a bit like the early Victorian botanists who collected thousands of specimens without a theory of evolution to make sense of them.
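A small simulation makes the correlation trap concrete. In this made-up model the ad does nothing at all: deal-prone shoppers are simply more likely both to recall seeing it and to buy more anyway. The exposed-versus-unexposed gap that falls out looks exactly like “ad effectiveness”.

```python
# Hypothetical sketch: a correlation between ad exposure and sales with no
# cause and effect behind it. Purchases never depend on seeing the ad.
import random

random.seed(1)

rows = []
for _ in range(10_000):
    deal_prone = random.random() < 0.3
    # Deal-prone shoppers are more likely to notice and recall the ad...
    saw_ad = random.random() < (0.8 if deal_prone else 0.2)
    # ...and buy more regardless of it (note: saw_ad is not used here).
    purchases = random.gauss(5 if deal_prone else 2, 1)
    rows.append((saw_ad, purchases))

exposed = [p for saw, p in rows if saw]
unexposed = [p for saw, p in rows if not saw]
avg = lambda xs: sum(xs) / len(xs)
print(f"Avg purchases, exposed:   {avg(exposed):.2f}")
print(f"Avg purchases, unexposed: {avg(unexposed):.2f}")
# The sizeable gap is entirely the shoppers' own propensities showing
# through, not anything the ad did.
```

Crunch that data without a theory of who chooses to respond and why, and you would happily report a healthy uplift from an ad that, by construction, achieved nothing.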

So next time you are measuring the effectiveness of a particular marketing activity, poke around under the bonnet and look behind the metrics model’s apparent sophistication to ask: “Does it assume that the same ‘stimulus’ always generates the same ‘response’? Are we measuring cause and effect, or correlations?” Because making do with observed statistical regularities is not the same as developing a real understanding of what works, and why.

