Measured Approach

As the sales promotions industry bids to improve standards of accountability, issues relating to confidentiality, trust and cost are making it hard for agencies to set benchmarks.

Talking about accountability to some clients and agencies is like watching the conductor of a brass band suck lemons: not only does it make his mouth pucker, it prevents him from doing his job. But however unpalatable accountability may be, it is becoming increasingly important for those in charge of promotions and incentives.

Nowhere is this more evident than in the realm of sales promotion. The continued redirection of marketing budgets from above to below the line suggests promotional marketing achieves something. The question is: what do you measure, and how? For all the investment and the thousands of people involved in sales promotion, it has attracted only one serious attempt to quantify its effects: Professor Andrew Ehrenberg’s Repeat Buying study, most recently republished in 1998.

Nigel Pearcey, head of client services at Clarke Hooper Momentum, says: “It was misunderstood because it looked at price promotions. When the research came out, everybody said: ‘What’s that about?’ Price promotions don’t build a brand image or produce long-term results. Once you understood that, it made sense.”

One reason for the confusion about Ehrenberg’s work is that it appeared to prove that sales promotions have almost no effect. Any increase in sales during the promotional period was followed swiftly by a fall in sales volumes. The study showed that price promotions lead to all types of customer – from distributors to retailers and consumers – simply bringing forward purchases to take account of the offer. Brand preferences and frequency of use were virtually unaffected.
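To see how that bring-forward effect plays out in the numbers, consider a minimal sketch with invented weekly volumes (the figures below are purely illustrative, not drawn from Ehrenberg’s study): the spike during the promotional week is cancelled out by the dip that follows, leaving incremental volume over the full period at roughly zero.

```python
# Purely illustrative figures, not from the Repeat Buying study.
baseline = 100                               # assumed steady weekly sales (units)
weekly_sales = [100, 100, 150, 70, 80, 100]  # price promotion runs in week 3

# Incremental volume: sales above (or below) baseline across the whole period.
incremental = sum(week - baseline for week in weekly_sales)
print(f"Incremental volume: {incremental} units")
# Prints 0: the promotional spike is offset by purchases merely brought forward.
```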

Price promotions continue to play an important role in marketing for a variety of reasons: achieving trial, fighting competitor entries and supporting distribution. But these are far from the be-all and end-all of what promotional marketing can achieve.

To understand how to measure the impact of other techniques, the Sales Promotion Consultants’ Association (SPCA) decided two years ago to develop a set of best practice guidelines around its Promotional Performance Assessment criteria. It started by turning to Tim Ambler, professor of marketing at the London Business School, who is working on Marketing Metrics – a research partnership between the London Business School, the Marketing Society, the Institute of Practitioners in Advertising and the Marketing Council. Pearcey says: “He is looking at the whole area of marketing, but no one discipline. We persuaded him to do a specific promotions project alongside that.”

Paper work

Following a survey of SPCA and Incorporated Society of British Advertisers members, Ambler prepared a Green Paper on the subject. This has been developed into a White Paper which is intended to provide a platform for discussions between clients and agencies.

Ambler has excluded price promotions from the project, focusing instead on how to measure the effectiveness of sales promotions, how objectives should be defined and tracked, and what obstacles lie between the best practice of a fully accountable promotion and the general practice of operating according to “gut feelings”.

He points out: “Informal evaluation is the norm. If agencies and clients are happy with a promotion, for whatever reasons, so be it. A lack of clarity may be just as well. Sales promotions are perhaps like older people making love with the lights off: if they could see everything clearly, they would never go through with it.”

The Paper notes that it is extremely difficult to isolate the effects of any one marketing activity. Sales promotions are particularly affected by external factors: everything from competitor activity to the weather.

But Ambler points to some key areas of poor practice that make evaluation even harder. These include:

– Campaign objectives are often not included in the creative brief

– Revisions to the campaign during its development often alter the objectives set out at the beginning

– Other elements of marketing are not taken into account when setting the baseline against which the promotion is measured

– All promotions are expected to be profitable, when the real goal may be brand equity

– Payment by results, which should form part of the agency’s reward, rarely does.

The last point underlines the complexity of evaluating promotional performance, and the resistance to it in some quarters. “The data was considered reliable for targets and later comparison, but not necessarily reliable for bonus purposes. Most people saw that as an argument against payment by results, but I see it as an argument in favour. Paying cash against numbers is the only real test. If they are not reliable enough for that purpose, they are deluding those who use them,” writes Ambler.

Payment by results

Interfocus managing director Matthew Hooper argues that agencies want to be able to measure their input: “When you have a debate about agency fees, the client often says ‘I am not going to pay that.’ But there is no justification for not doing so. Payment by results is an interesting thing to discuss because it begins to value the agency’s input by delivering a campaign that works.”

His argument about fees is that “if you pay the agency £1m but make £10m, you don’t care”.

But it is important to be able to justify the investment – to have a performance yardstick and to quantify areas of underperformance. Ambler also advocates adopting a risk/reward approach. The agency fee should cover the costs, while the bonus provides a performance-related incentive.
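A minimal sketch of how such a risk/reward structure might work, assuming a cost-covering base fee and a bonus released only when an agreed target is hit (the function, figures and bonus rate here are hypothetical, not drawn from Ambler’s Paper):

```python
# Hypothetical risk/reward remuneration: the fee covers the agency's costs,
# the bonus rewards performance against an agreed, measurable target.
def agency_payment(costs: float, target: float, achieved: float,
                   bonus_rate: float = 0.15) -> float:
    fee = costs  # base fee simply covers costs, carrying no profit
    if achieved >= target:
        # bonus grows with the margin by which the target is exceeded
        return fee + fee * bonus_rate * (achieved / target)
    return fee   # target missed: no bonus

# e.g. costs of 200,000, a target of 50,000 redemptions, 60,000 achieved
print(agency_payment(200_000, 50_000, 60_000))  # 236000.0
```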

These arguments tend to come up against a barrier when you consider the information required to set benchmarks. This is generally either in the client’s possession or bought from third-party market research agencies. Issues relating to confidentiality, trust and cost have all stood in the way of a more business-like approach. One agency source says: “A lot of lip-service is paid, but it is difficult to get the full information to assess what has been done.”

“As an agency, we are concerned that benchmarks are not set properly or addressed in a quantified way,” says Graham Griffiths, director of planning and strategy at the Promotional Campaigns Group. He acknowledges that data can be hard to source or expensive to obtain but, where the will exists, there are significant benefits. “Some clients are good and insist on measurements. We have one who gets an independent analyst involved,” says Griffiths.

“Their job is to pull in all the required data. We have been working on a repeat campaign for our client for more than three years and together we are able to create proper quantified targets and detailed reports.

“There is a simple return on investment table at the back, but we also know every aspect of the work. So, in the second year, we knew the starting point and could improve things. And in the third year we got better still,” he says.
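The return-on-investment figure at the heart of such a table is straightforward arithmetic; here is a sketch with invented numbers (neither the cost nor the profit figure comes from Griffiths’ campaign):

```python
# Invented figures, in the spirit of a "simple return on investment table".
promo_cost = 250_000          # total cost of the promotion
incremental_profit = 400_000  # profit attributed to the promotion

roi = (incremental_profit - promo_cost) / promo_cost
print(f"ROI: {roi:.0%}")  # ROI: 60%
```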

This kind of detailed approach is the exception rather than the rule: as a benchmark, fewer than one in ten promotions is evaluated to this extent. There are structural reasons for this, beyond the cost and accessibility of data. Staff turnover on the client side is a major factor: many brand managers remain in their post for less than 18 months and may be more concerned with improving their curriculum vitae than the bottom line. There are plenty of examples of agencies being told to get on with the job, rather than worry about whether it will work.

Equally, clients argue that agencies have not done enough to ensure creatives know what the campaign objectives are, rather than simply fishing for a great idea that might boost sales. Where a planning department exists, objectives are less likely to be overlooked. But it is far from common practice for the client to sign off the creative brief. All too often, it is kept as an agency secret.

Tracking mechanisms

For Lowe Direct managing partner Tony Watson, the accountability question is less in dispute. When his agency undertakes targeted promotions or direct marketing campaigns, tracking mechanisms tend to be built into them. Even so, he admits that, “isolating the effect can be difficult, particularly when integrated campaigns are the norm. Even when you ask people where they saw the ad, they often mention titles you have not placed it in, or simply choose the first option you give them.”
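One common form of built-in tracking is a unique response code per medium, so that redemptions can be tallied back to the channel that produced them. A minimal sketch, with invented codes and channels (Watson does not specify Lowe Direct’s mechanisms):

```python
# Hypothetical code-per-channel attribution: tally redemptions by the
# response code printed on each medium to see which channel pulled.
from collections import Counter

redemptions = ["MAIL01", "PRESS02", "MAIL01", "INSERT03", "MAIL01"]  # invented
print(Counter(redemptions).most_common())
# [('MAIL01', 3), ('PRESS02', 1), ('INSERT03', 1)]
```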

But, like everyone who works below the line, he acknowledges that greater precision is required. Often with current approaches “the objective is just to have a promotion”, says Watson. This does not help to support continued investment in sales promotion. Nor does it benefit marketers’ careers if all they can show is activity, rather than outcomes.

Industry-standard measures need to be developed and adopted to address the financial directors’ need for hard evidence, marketing directors’ desire for results, the agencies’ interest in performance bonuses, and even the creative directors’ need to be vindicated. Accountability is an irresistible trend right across marketing.

But, as Pearcey acknowledges, it must be tempered with a recognition that, just as customers are not predictable machines, neither are promotions. “You could analyse things to death but not get any improvements,” he says.