When Bill Murray meets people who tell him they want to be rich and famous, the actor always dispenses the same advice. “I tell them to try being rich first,” he told The Guardian in 2003, “and see if that doesn’t cover most of it.”
Murray’s point, one he understands better than most, is that fame brings its own challenges. And it’s a realisation that must also have struck home for the ‘godfathers of effectiveness’, Les Binet and Peter Field.
It would be unfair to portray these two ad men as anonymous in the 1990s and 2000s. If you were involved in advertising and marketing back then, both were well-respected experts. But over the last two years their work, which has been chugging along in one form or another for more than two decades, has suddenly garnered global attention and – even more impressively – is being actively cited and used by hundreds of marketers.
I should declare an early interest. I was one of those people who read their work back in the ‘old days’ and am now one of their staunchest fans. It’s rare I get through a client meeting or a class without at least one quick side journey into their treasure trove of charts and models.
At the centre of their contribution, at least as I see it, are three distinct but connected observations. First, there are the two undulating lines of growth: one representing short-term, performance marketing; the other longer-term, incremental brand building.
The fact that these two different paths to growth are not mutually exclusive and that they operate with very different dynamic implications is at the heart of the ‘long and short’ approach to marketing planning. Indeed, the most famous element of Binet and Field’s work is the general prescription of a 60:40 ratio of long- and short-term investments for maximum effectiveness.
The second observation is that marketing is simultaneously becoming less effective and more short-term in its focus. Look at the two undulating lines of the long and short from within a six-month window and shorter, performance-based marketing always looks like it delivers a much better return. It’s only when you take in a longer multi-year perspective that the fallacy of ignoring brand becomes apparent.
Finally, and with least exploration and application so far, is the implication of Binet and Field’s work for targeting. In recent years the impact of the Ehrenberg-Bass Institute and its promotion of “sophisticated mass marketing” has broken one of our discipline’s most cherished principles – that you must segment and target in order to have the greatest marketing impact.
It’s becoming increasingly common to encounter senior marketers who happily admit to targeting “everyone in the category” or who develop advanced marketing campaigns that openly attempt to reach every single consumer on the planet.
For many marketers that has been a tricky concept. The benefits of targeted marketing have been apparent for decades and yet the empirical power of Ehrenberg-Bass is hard to resist. In that light, Binet and Field’s work provides a fascinating and attractive middle path.
Their work demonstrates that when you are adopting a long-term brand building path it pays to target the whole category and eschew any form of segmentation. But it also shows that when you are playing the shorter, performance game it makes more sense to target existing consumer segments to get the best return. Put more simply, you do not just want the long and the short of it, you also want some mass and some targeted marketing within that approach.
But as awareness of Binet and Field and their work has grown, so too has the sniping and counter-argument. In any discipline the rise of prevailing theories should lead to a series of countervailing concepts that attempt to disprove or qualify them. There is – it would seem – an almost perfect correlation between how much marketing thinking is venerated and how much it is immediately undermined. In the case of Binet and Field the work has been challenged on three fronts:
1. The TV bias

This is the least credible of the three critiques and will therefore occupy us the least, but there have been suggestions over the past two years that the work of Binet and Field, and indeed the operation of the Institute of Practitioners in Advertising (IPA) that sponsors much of the authors’ work, has an inherent bias in favour of television.
The criticism appears to spring from two places. First, the IPA has accepted sponsorship money from Thinkbox, the body that represents commercial TV broadcasters in the UK. But the criticism ignores the raft of other sponsors, starting with Google, Facebook, Radiocentre and Newsworks, that have also worked directly and indirectly to fund both the IPA and Binet and Field in the past.
“We only accept sponsors who give us complete freedom,” Binet recently explained. “Our recent work has been jointly sponsored by Google and Thinkbox. There were findings that were uncomfortable for each but we published regardless.”
There appears to be an unsavoury connection being made between the general findings of Binet and Field’s work and TV industry funding. It’s true that over the past decade the authors’ staunch defence of TV and its effectiveness has stood out in a marketing discipline intent on portraying the death of TV as an advertising medium at every possible moment.
But the contrast between Binet and Field’s positive view of TV and the industry’s negative one is not a function of trade funding or bias, but rather an empirical versus non-empirical perspective. TV remains a fabulously effective medium, not because the TV industry is paying for it to be said, but because the effectiveness data supports that fact.
2. The ‘winners’ circle’ sample
Another recurring concern with Binet and Field’s work stems not from their funding but their sample. Since 1980 the IPA has run its Effectiveness Awards and that growing database has been the basis for all of Binet and Field’s results.
There is a recurring argument that this subset of campaigns fails the test of representativeness when set against the total population of marketing endeavours. Specifically, the argument runs, the 1,000+ cases that have been submitted for an IPA Effectiveness Award are flawed on three levels.
First, they are all British and therefore tarred with the same dirty brush of being from just one (very peculiar) market. Second, they are more likely to be big campaigns because the IPA draws a disproportionate amount of attention from the big brand/big agency crowd and not from the smaller side of town. Finally, and most challengingly, these campaigns were already part of the winners’ circle – or at least their submitters thought they were. Why else submit them for an effectiveness award?
To be fair to Binet and Field, they do not just look at the winners of the Effectiveness Awards; they analyse all submissions. Nor have they ever suggested they are looking at a completely representative sample of all campaigns. Rather, by comparing what does, and does not, generate effectiveness within that pool, they can see what enables campaigns to move from “good to great”.
The analogy Binet uses is football. He acknowledges his work only looks at the professional game but that in comparing the wide variety of performance at that level, any player could learn more about how to improve their own game.
Given the data set, this is a limitation that is impossible to avoid. It certainly makes some of the claims from Binet and Field’s work less generalisable – the typical campaign duration might be shortening, for example, but only among IPA-submitted campaigns. But when it comes to excellence and what makes for superior impact, is basing the insight on only the big and the best a confounding factor? Limiting, yes. Confounding, no.
3. Self-reported effects
One of the more recent concerns about Binet and Field’s output is that much of their effectiveness reporting comes from those submitting the case for review. Once a case has been submitted to the IPA the submitter is sent a confidential questionnaire that asks them to assess the scale of their campaign’s effect across sales, market share and a total of seven different effectiveness metrics. These results are kept confidential but are used by Binet and Field to assess effectiveness, given output metrics are usually disguised or hidden in public cases.
Quite correctly, several critics have questioned the validity of research that uses these self-reported effects as a basis for overall effectiveness. Ideally the actual sales surge or market share increase would be reported and correlated against. But this kind of openness and cross-comparison is all but impossible to achieve on such a large scale. Does that make the use of self-reported effectiveness a weakness of the research?
It might, were it not for the fact that Binet and Field have repeatedly shown that where they do have external effectiveness data, there is a strong correlation between these effects and the self-reported large effects that they use for their usual analysis.
For example, those submissions that reported at least one very large business effect demonstrated almost three times the market share growth of those submissions that reported no large effects. Similarly, those submissions that self-reported more very large business effects were also more likely to enjoy significant ‘excess share of voice’ (ESOV) superiority – another commonly used external measure of campaign effectiveness – than those that did not.
In other words, asking marketers to report the size and scale of their effects is a limitation, but it does appear to be a suitable proxy for actual effectiveness, and an efficient way to get round the issue of asking hundreds of companies to report incredibly sensitive business metrics each year.
In an ideal world we would have this data, but the world is not ideal and we’ve known ever since Schrödinger opened that box that empirical research must make epistemological bargains with reality. The question isn’t whether bargains have been made but whether they invalidate the work.
In the case of Binet and Field the bargains are there for all to see. Clearly the work is based on a small subset of total marketing campaigns and depends, for much of its insight, on self-declared reporting from marketers who have a vested interest in making everything as impressive as possible. But even with these sizeable caveats, the work transcends its limitations, from my perspective.
That will not and should not prevent others from attempting to prove or disprove the findings. Indeed, many of us are already exploring (with much smaller data sets) whether some of the key contentions are accurate.
General criticisms of Binet and Field are likely to increase as their fame and influence grows. Much of that criticism is warranted and is an essential part of the disciplinary maturity of marketing. In truth, much of this debate centres on the imperfection of all data in proving marketing theory. We do not study rocks or gravity or the rotation of the earth.
This big messy world of advertising, with all its varied and contradictory inputs, does not easily correlate with the equally untidy world of corporate performance and marketing effectiveness. In fact, you would struggle to find a more cat-like bunch of statistics to wrangle. Developing any knowledge from this changing, reflexive mess deserves enormous effort and expertise and, for all their limitations, I thank Binet and Field for making some sense of it all.