When it comes to marketing metrics, David Reibstein knows more than most. The Wharton marketing professor is an author of the best-selling “Marketing Metrics: 50 Metrics Every Executive Should Master” and recently turned his attention to which marketing metrics work, and which don’t.
He investigated metrics relating to the marketing mix, marketing strategy, market orientation, marketing actions and marketing processes, as well as metrics relating to customer satisfaction, customers generally and brands. He covered many industries, including pharmaceuticals, packaged goods and financial services. Reibstein looked at awareness, perceptions, attitudes, loyalty, frequency, value, customer lifetime value, customer acquisition costs, customer retention costs, research and development (R&D) spend, new products as a percentage of total revenue, time to market, price elasticities, percentage of sales on deal, media scheduling and placement, promotional uplifts, return on investment on promotions, website hits, stickiness and… well, you get the picture. His list of references runs to 28 pages.
“Several important findings emerge,” he tells us. “Namely, customer retention drives customer lifetime value, customer lifetime value drives share price value, and satisfaction drives both customer retention and customer lifetime value. Also, new products drive share price value. In other words, if you innovate and treat customers well, they’ll come back for more, which will show through in your results.”
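The retention-to-lifetime-value link in that chain is easy to see with a rough sketch. Assuming the textbook steady-state formula CLV = margin × r / (1 + i − r), where r is the annual retention rate and i the discount rate (the $100 margin and 10% discount rate below are purely illustrative, not figures from Reibstein's studies):

```python
# Illustrative sketch: why customer retention drives customer lifetime value.
# Uses the standard textbook steady-state CLV formula; margin and discount
# rate are assumed example values, not data from the research discussed.

def clv(margin: float, retention: float, discount: float = 0.10) -> float:
    """Expected lifetime value of a customer under a constant retention rate."""
    return margin * retention / (1 + discount - retention)

for r in (0.70, 0.80, 0.90):
    print(f"retention {r:.0%}: CLV = ${clv(100, r):,.2f}")
```

Even in this toy version, lifting retention from 70% to 90% more than doubles lifetime value, which is the shape of the relationship the professor's studies confirm.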
Trouble is, most of us knew that already. We didn’t need a clever professor to conduct exhaustive research to tell us.
Reibstein continues with the question: “Where are we in terms of generating empirical generalisations about the numerical size of the various links?” “Sadly, we are not very far along… more studies are needed,” he concludes. Really? Here’s a suggestion: no matter how many more studies we do, they will never prove anything beyond what the professor has already proved. Why? Because if your underlying assumptions are flawed, your results will be flawed too.
Reibstein’s problem lies with his model of marketing. Marketing productivity depends on a “value chain” of causes and effects, he argues. “In this chain, company actions, such as R&D, advertising spending and customer targeting, impact the mindset of customers, employees, partners and competitors. These impacts are linked to customer behaviour in the product market; the consequence of product-market outcomes is financial performance. The final outcome is stock price or market capitalisation.”
That all sounds very sensible but in reality it is about as scientific as a witch doctor correlating chicken sacrifices to harvest yields. Producing lots of data and making lots of correlations may look scientific, just as wearing a white coat makes you look like a doctor, but if the fundamentals of cause and effect are missing or misunderstood it’s a waste of time.
So where do models like this go wrong?
First, they assume that people always respond to the same action in the same way. If I kick a ball twice with the same force and in the same direction, then both times it will follow the same trajectory. However, if I kick a dog twice with the same force in the same direction, chances are that the first time it will yelp and the second time it will bite me. In physics, the same cause always has the same effect. In life, the same cause routinely generates different effects.
Apart from some crude attempts to model consumers’ reactions to how many times they see or hear a particular ad, the underlying assumption of most “effectiveness” models is that people do not learn: they assume that the same marketing actions – a particular type of promotion, advertising or channel of communication – always have the same effects, such as a particular uplift in sales. In other words, they assume that people are lumps of inert matter, not living beings.
Second, they assume that the same actions, for example “an ad campaign”, have the same effects regardless of whether they add or destroy value for consumers. The key measures of success used (sales, profit, margin, ROI and so on) relate solely to benefit to the company. Aside from the rather weak exception of customer satisfaction, they are blind to consumer value. The net result is a model that’s a bit like Einstein coming up with the equation e=m and leaving out the c².
To see why this matters, try this as a diagnostic exercise. Take a key performance indicator such as sales or profits and estimate what proportion of it is driven by each of the following categories.
One, real value alignment, where what you are offering and how you bring it to market is so in line with what the customer wants that he doesn’t even bother looking elsewhere.
Two, prodding. Here, even though you have produced your product, you find yourself adding a little extra to sway the consumer’s decision, such as a special discount.
Three, deception, where you take advantage of the consumer’s relative ignorance, or of the cost to the consumer of finding the information he needs to make a better decision. Examples include misleading claims and labelling, confusing tariffs designed to cloud clear price comparisons, and so on.

Four, a no-man’s land between these categories, where the real reasons behind an “effect” remain murky. Take the retailer who thought consumers loved his store’s range, ambience and pricing and tried to replicate the formula elsewhere, overlooking the fact that many customers were frequenting his store simply because it happened to be the most convenient. Or the brand manager who thought everyone was loyal to his brand when they just happened to like a particular product – and then just happened to like a rival’s product even more. In both cases, what actually causes success is different from what the marketer thinks it is.
Move the needle
The problem is, narcissistic metrics such as sales, profits, margin and ROI give you no means of distinguishing between any of these categories. From the metric’s perspective all four of the above look exactly the same (and if your sole aim is to “move the metrics needle”, guess which of the four categories tend to move the needle farthest and quickest?).

Right now, marketing metrics are in a mess. Not because marketers are innumerate or resistant to accountability, but because we are measuring only half the equation, which leaves us unable to divine any real connections between cause and effect. As long as “accountability” is measured solely by narcissistic metrics, we will never have the knowledge we really need to improve marketing performance.