Measure of success

Lord Leverhulme famously said that he knew half his advertising worked; he just didn’t know which half. He made the observation decades ago, and the sentiment still haunts marketing directors today.

Despite what anyone says, no marketing discipline, direct marketing included, has at its fingertips a fail-safe model that can predict how effective a campaign will be, or evaluate exactly how effective it has been.

It’s relatively easy to tell whether brand-building campaigns have built awareness – but have they driven sales? Direct marketing can certainly measure response to a campaign – but if the response is disappointing, how easy is it to isolate the offending element?

The discipline that suffers most from this affliction, however, is sales promotion – and for a number of reasons. The sector’s biggest problem is poor definition: a price promotion is not the same as a value-added promotion, yet the two are often lumped together, even when it comes to allocating budgets.

Sales promotion is also unusual among marketing disciplines in that retailers can wield a big stick, frequently forcing manufacturers to run promotions in order to drive in-store traffic. If manufacturers demur, the threat of deselection is never far behind.

Then there are cases such as McDonald’s or, more famously, Hoover, where uptake so far exceeded all expectations that the promotion imploded. These promotions turned out to be disasters precisely because they were so effective.

In the face of this, the Sales Promotion Consultants Association is tackling the issue by linking into a joint initiative with the London Business School and Cranfield University, called the Marketing Metrics Project.

The project is a research partnership between the LBS, The Marketing Society, the Institute of Practitioners in Advertising and The Marketing Council, set up to report on “best practice in measuring marketing effectiveness and shared language”. The project covers a 30-month period from March 1997 and looks specifically at marketing as a whole, rather than at individual elements of the marketing mix. The SPCA, however, has linked into the programme to extend Marketing Metrics to cover the measurement of specific promotions.

Nigel Pearcy, head of client services at Clarke Hooper and director responsible for trade practices at the SPCA, says: “Previous studies on evaluation in the promotions arena have been limited and misreported. About four years ago there was a report that claimed sales promotion does not do anything for brands. What it was actually talking about was price promotions.

“What we want to do is look at evaluating added-value promotions. There has never been a proper study of this.”

The initial result of the Sales Promotion Metrics Project will be a green paper, available at the SPCA annual meeting in July, which will bring together promotions knowledge, objective setting and evaluation, as well as common definitions and “proposals for shared language”. The green paper will outline best practice for both agencies and clients.

Says Pearcy: “If I’m honest, the direct marketing industry has been much better at evaluating itself. The growth of direct marketing is a result of the industry being able to say ‘if you do this, that will happen’.

“But in the promotional area, objectives are set and campaigns are carried out, but with little regard to long-term results. There needs to be a more commonly understood evaluation tool.”

This is not to say that agencies lack their own evaluation tools for assessing campaigns – Clarke Hooper certainly has one. “But”, says Pearcy, “when you get into new client areas, there is not a lot of background knowledge you can present to them. The best you can do is play with other clients’ numbers.”

Professor Robert Shaw is visiting professor at Cranfield University and is involved in the Sales Promotion Metrics Project. Two years ago he was commissioned by the Financial Times to write a report on marketing accountability, and it was then that he discovered that little had been written on the subject.

“I found that what a lot of companies mean by evaluation is very short term and very narrow. Few people are involved in systematically tracking both the input and output of marketing,” says Professor Shaw.

At the same time, he surveyed 130 companies to find out what kind of promotions measurement they did and what they did with those results. “This showed there is a lot of aspiration to conduct improved measurements, but in practice, measurements range from the sophisticated to those that do next to nothing.”

Professor Shaw is unimpressed by the companies which rant on about how much more they must understand their customers – but then invest in computer system-based customer satisfaction surveys, “which are mainly self administered and very poor. But the board wants a number and that’s how it gets it”.

He says not enough money is ploughed into research and tracking conducted by research companies. “In this country, tens of billions of pounds are spent on advertising. The total UK research spend is £500m, and only a minuscule amount of that will be spent on tracking and measuring promotions.”

Danielle Pinnington, deputy managing director of Research International’s consumer division, supports this view. “There is a definite increase in the proportion of time and effort spent on promotions, but we think research is an area people should get much more involved with.”

She does say, however, that some of her company’s work is motivated by the clients’ need to provide evidence to retailers that they “should pull back on promotions because they think it could be harming their categories”. Pinnington says retailers put pressure on manufacturers to launch link-saves, multisaves or price cuts – all of which generate store traffic.

One person interviewed for this report, who didn’t want to be named, says: “People say they are being bled dry by retailers which demand promotional support. These promotions have nothing to do with the customer, but retailers have suppliers over a barrel.”

Professor Shaw probably does himself no favours by saying: “A sad fact of life is that there is a mountain of good research, done in the US, which shows the bulk of sales promotion does not generate long-term growth.”

He admits that this research includes price promotions and concedes that “the industry needs to take some of these areas apart and examine them in far more detail.”

That said, Professor Shaw is far from dewy-eyed about the reality of what increased measurability can achieve.

“You are never going to be able to predict success with 100 per cent accuracy. There is a lot of evidence to show that attempts at anything new in marketing have very low success rates – failure rates of about 90 per cent. However, if modelling and predicting can help you to shift the odds of success from ten per cent to 15 per cent, then that’s all you need.

“In reality, if you introduce a new promotion or line extension, there is a huge failure rate. Our message is that there is an inevitably high failure rate, not because promotions are bad, but because there is so much competition. If you can shift your success from ten to 15 or even 20 per cent, profitability will be radically improved.”
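To make Shaw’s arithmetic concrete, here is a minimal sketch in Python using wholly hypothetical figures – the number of launches, the payoff and the cost are invented for illustration and do not come from Shaw or the research he cites.

```python
# Illustrative sketch of Shaw's point about shifting the odds.
# All figures below are hypothetical; none come from Shaw or the
# research he cites.

def expected_profit(launches, success_rate, payoff_per_success, cost_per_launch):
    """Expected portfolio profit: successes pay off, every launch costs money."""
    return launches * (success_rate * payoff_per_success - cost_per_launch)

LAUNCHES = 100        # promotions attempted across a portfolio
PAYOFF = 1_000_000    # return from one successful promotion (hypothetical)
COST = 120_000        # cost of mounting one promotion (hypothetical)

for rate in (0.10, 0.15, 0.20):
    profit = expected_profit(LAUNCHES, rate, PAYOFF, COST)
    print(f"success rate {rate:.0%}: expected profit {profit:+,.0f}")

# success rate 10%: expected profit -2,000,000
# success rate 15%: expected profit +3,000,000
# success rate 20%: expected profit +8,000,000
```

On these assumed numbers, the five-point shift Shaw describes is the difference between a loss-making portfolio of promotions and a profitable one.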

If water-tight predictability is not available, there are at least increasingly sophisticated monitoring and tracking systems.

Information Resources has a number of tracking tools for clients which record a great deal of useful information: what product was promoted, and when, where and how it was promoted. This information is added to the weekly retail scanning data tapes received from collaborating retailers and processed for clients. IR also provides a number of customised tracking packages.
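As a rough illustration of the kind of merge IR describes – with an invented schema and invented figures, not IR’s actual data formats – promotion records might be joined to weekly scanning data like this:

```python
# A minimal sketch of the merge IR describes: promotion records
# (what, when, where and how something was promoted) joined to weekly
# retail scanning data. The schema and figures are invented for
# illustration; they are not IR's actual data formats.

import pandas as pd

promotions = pd.DataFrame({
    "product":  ["ColaX", "ColaX"],
    "week":     ["1998-W10", "1998-W11"],
    "retailer": ["StoreCo", "StoreCo"],
    "mechanic": ["multibuy", "price cut"],  # how it was promoted
})

scan_data = pd.DataFrame({
    "product":  ["ColaX", "ColaX", "ColaX"],
    "week":     ["1998-W09", "1998-W10", "1998-W11"],
    "retailer": ["StoreCo", "StoreCo", "StoreCo"],
    "units":    [4_200, 9_800, 7_500],  # weekly EPOS sales
})

# Left-join scan data to the promotion log: every week keeps its sales
# figure, and promoted weeks carry the mechanic that was running.
merged = scan_data.merge(promotions, on=["product", "week", "retailer"], how="left")
print(merged)  # unpromoted weeks show NaN in the 'mechanic' column
```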

“But we don’t monitor consumers’ actual behaviour,” says IR business development director Bruce Dove. “EPOS purchase data is not the same as consumer behaviour: it tells you what has been bought, but it doesn’t tell you whether the promotion stimulated trial or raised the average rate of purchase. It doesn’t say anything about loyalty to a brand or to a category, or what the longer-term effect of a promotion is.

“We can measure short-term impact, but we are not good at measuring long term. When we try to understand the longer term effect of campaigns, the ingredients we have don’t lend themselves to that sort of analysis,” says Dove.

But, even here, Dove is sensing change. Although he is frank about the limitations of the IR approach, this doesn’t stop clients asking the company to dig deeper into what it has, to try to get a clearer picture of how their promotions should look.

“Our biggest projects are those software-based solutions where clients have asked us to examine all the information that has been gathered on them and to try to project forward with the analysed data.

“The reason the company is being asked to do this is that promotions are becoming increasingly costly to manufacturers, yet no-one really monitors or evaluates them in any systematic way. Even the large companies are only doing elements of it.

“What we are seeing now is the systematisation of information. These projects are huge, but the return on them will be fantastic. If the results mean that manufacturers can execute even slightly better promotions, the financial returns will be enormous.”