Ritson is right: pre-testing has transformed marketing effectiveness

The effectiveness expert and author of The Long and the Short of It explains how pre-testing has led to more effective ads, focusing on emotional response.

In his article in Marketing Week last week, Pre-testing ads is not divisive, it’s a no-brainer, Mark Ritson presented a very powerful argument for the use of quantitative ad testing, with which I agree entirely. He also made the point that pre-testing has changed – something Mark glossed over, so I’d like to amplify his point with the aid of some remarkable data from the ever-handy IPA effectiveness databank.

When I was a young strategic planner in the early 1980s, the dominant pre-testing model was very different from today’s. It was based on a rational persuasion model focused on the impact of the ad on purchase intention metrics.

It was found that the best way to do well in these tests was to feature ‘new news’ in each and every ad; to give consumers a reason to justify intention to purchase. Advertising designed to generate an emotional brand response simply didn’t deliver as powerfully in the short term.

It didn’t matter how trifling this news was; so long as it was there, respondents found it easy to claim it would influence their choice. Nor did it matter that this resulted in some very dull advertising that, in the real world of media exposure, would generate little interest or attention outside of the immediate moment of purchase – in the testing situation, attention was guaranteed. Nor did it matter that respondents would have forgotten this advertising very rapidly after seeing it – their intentions were recorded very shortly afterwards.

And at that time, outside of those who had worked in both direct response and brand advertising, there was little appreciation that the drivers of short-term response were very different from those of long-term demand. So the effect of this approach to pre-testing was to promote advertising that would primarily generate very short-lived effects, as it turned out, at the expense of long-term effectiveness.

So, when Les Binet and I wrote The Long and the Short of It, our research revealed that pre-tested campaigns outsold non-pre-tested ones over the short term, but thereafter it was the non-pre-tested campaigns that dramatically outsold the pre-tested ones. Because they were more likely to be emotional in nature, the full impact of the non-pre-tested campaigns took longer to manifest.

A new methodology

But there was science of a sort behind the old model and it would take a long time to convince the marketing world that there was better science now available.

The pioneer of the new science in 2009 was BrainJuicer, now rebranded System1. Its emotional advertising response approach was validated (blind) among IPA case studies whose long-term business effectiveness was known (for transparency, I was involved in this study).

We found the campaigns that did best in System1’s pre-testing methodology were considerably more effective than those that tested worst. The study then applied the same comparison to the dominant pre-testing methodology of the time, contrasting the long-term effectiveness of the campaigns that did best with those that tested worst: these pre-test winners were very considerably less effective.

Happily, with the help of the late Daniel Kahneman and his 2011 book Thinking, Fast and Slow, the tide gradually started to turn in favour of the new model. Other research companies climbed on board, bringing different approaches to validation but reaching similar conclusions. And now we find ourselves in a much healthier pre-testing environment.

The IPA data shows how, over the last 25 years or so, quantitative pre-testing has evolved from apparently promoting ‘reason why’ advertising to encouraging emotional advertising.

This in itself would help increase effectiveness, but in addition, the new consumer response metrics correlate strongly with long-term sales effectiveness. We see the impact of the developments in pre-testing in the reported market share impacts of pre-tested and non-pre-tested campaigns. Against a backdrop of generally falling effectiveness that is primarily connected to media usage, pre-tested campaigns pull ahead of the others.

But the benefits of the pre-testing revolution go beyond share growth. Pricing power has always been a key metric associated with powerful emotional brands, and the improvements in pre-testing have seen a radical turnaround in the apparent ability to promote pricing power.

And other metrics tell the same story. So clearly Mark is correct on two counts: pre-testing has changed and it is now a no-brainer. He is also right to make an issue of pre-testing: among the IPA case studies, the use of quantitative pre-testing has fallen from 47% to 38% of campaigns across this period. Marketing is missing a trick.

Peter Field is an independent marketing consultant and wrote The Long and the Short of It with Les Binet.