We’re living in testing times. Or at least we should be. The speed and flexibility of interactive marketing means this should be a golden age of direct response testing. However, time after time I come across marketers who are averse to its very mention.
This is puzzling. Thanks in no small part to its advocacy by IPA chairman Rory Sutherland, behavioural economics is all the rage. This discipline puts “choice architecture” centre stage. Live market testing is the most definitive way to determine the optimal framing of propositions designed to change consumer behaviour.
Once the preserve of direct response marketers with limited small-talk and an obsessive interest in locomotive numbers, it seems it’s time for testing to enjoy some broader acceptance among all practitioners of the persuasive arts.
Yet still the resistance remains, even among the direct marketing community. I once heard a creative director claim (with no scientific justification that I ever saw) that 95% of the direct response tests conducted in the UK are statistically invalid. He pointed to the fact that the technique was invented in the US, whose vast population made sample sizes (along with the incidence of mass murder) proportionately greater.
It’s undoubtedly true that the vast multiple test matrices beloved of the Publishers Clearing House don’t survive translation to this side of the Atlantic. But no such objection should stand in the way of simple, sequential tests with fewer cells.
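The statistical point is worth making concrete. A simple two-cell test stands or falls on a standard two-proportion significance test, and the arithmetic shows that samples well within UK reach can produce a valid result. The sketch below uses entirely hypothetical mailing figures (10,000 names per cell, 2.0% versus 2.5% response) purely for illustration:

```python
import math

def two_cell_test(resp_a, n_a, resp_b, n_b):
    """Two-proportion z-test for a simple two-cell direct response test.

    resp_a/resp_b: responses in each cell; n_a/n_b: names mailed per cell.
    Returns the z statistic and a two-sided p-value.
    """
    p_a = resp_a / n_a
    p_b = resp_b / n_b
    # Pooled response rate under the null hypothesis of no difference
    p_pool = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical UK-scale test: 10,000 names per cell,
# control pulls 2.0% (200 responses), test cell pulls 2.5% (250)
z, p = two_cell_test(200, 10_000, 250, 10_000)
```

On those assumed numbers the difference comes out comfortably significant at the conventional 5% level, which is the point: a modest sequential test with two cells does not need American-scale volumes to be statistically sound.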
It shames me to admit it, but we in the agency world have done testing few favours. Too often, we have used the phrase “let’s test it” as a cover for indecision when faced with the task of recommending a clear course of action to our clients. This is worse than the parable about research involving drunks and lamp-posts.
The catalogue of agency sins does not end there. We have erected commercial barriers to testing by charging almost obscenely high prices for multiple versions of the same ad. This greedy practice stops agencies learning. In doing so, it ultimately erodes our utility to clients.
Kelly Johnson, the visionary aircraft engineer who established Lockheed’s revered Skunk Works, famously promulgated 14 management rules for his team to live by. Rule nine states: “The contractor must be delegated the authority to test his final product in flight. If he doesn’t, he rapidly loses his competency to design other vehicles.” The same could be said of agencies and the flight-testing of their final product.
Even without predatory agency fees, testing can cost money – at least in the short term. There are additional production, media and analysis costs to factor in. Most importantly of all, there is the risk implicit in sacrificing a known rate of return for an unknown one in part of your plan.
Earlier this year, I spent more time than was entirely healthy for me immersing myself in the world of the professional fund manager. I learned many new and (to me) impressive buzzwords and business concepts. One of these was the notion of “alpha” and “beta” investments.
Beta investments are those meat-and-potatoes bonds and equities that managers rely on to deliver predictable but unexceptional performance. Alpha investments are the R&D lab of the fund management world. They carry higher risk, but they may reward you with spectacular performance.
Now the thing is, even the dullest fund managers need to have a bit of alpha in their lives as a hedge against history. Because as time goes by and the rules of the market change, today’s risky alpha could emerge as tomorrow’s breadwinning beta. It was the first time I’d heard gambling expressed as a risk reduction strategy, but the logic is compelling.
However, by far the best case for testing I’ve come across recently is to be found between the covers of John Kay’s exquisite and rightly praised book Obliquity.
His central thesis is that business goals are frequently best achieved through indirect methods: that successful outcomes flow more often from trial and error discoveries by prepared minds than from detailed and meticulous planning. (Of course, by the time the case study comes to be written, the story is usually one of a perfect masterplan immaculately executed, but that’s human nature for you.)
Obliquity is full of illustrations of the brute power of trial and error, but Kay’s best example is probably evolution. Diversity means that multiple possible biological answers to the problem of survival emerge. They are tested, and those best suited to the prevailing environment flourish.
Such a celebration of strategy by force majeure is humbling and not a little challenging for any of us with “planner” in our job title. But we should remember that his thesis underlines the importance of clearly defining the desired end-state (something that military planners would seem to have forgotten of late).
Above all, Kay’s work reminds us that there is usually more than one solution to any given problem, and that the first one we think of isn’t necessarily the best.
That’s one proposition I don’t think needs testing.
Richard Madden is planning director at Kitcatt Nohr Alexander Shaw