The only number you need to know does not add up to much

Fred Reichheld’s Net Promoter Score, introduced in a 2003 article, may have been adopted by large companies, but it really is not much use to marketers

As storms in tea-cups go, this one has been a right humdinger. Back in 2003, the great loyalty guru Fred Reichheld published an article in Harvard Business Review rubbishing customer satisfaction research and announcing its replacement, “the one number you need to grow”: the Net Promoter Score (NPS).

Ask customers whether they would be willing to recommend you and calibrate their answers on a scale of zero to ten, defining nines and tens as promoters and sixes and below as detractors. Subtract the percentage of detractors from the percentage of promoters and you have your NPS. Simple and valuable, claimed Reichheld: the best ever predictor of future profit growth.
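The arithmetic is simple enough to sketch. A minimal illustration in Python, using the zero-to-ten scale and cut-offs described above (the survey responses are invented for illustration):

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the zero-to-ten 'would you recommend?' scale. Sevens and eights
    are 'passives' and count in neither group."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten survey responses: five promoters, two passives, three detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 3, 5, 6]))  # → 20.0
```

Note that the passives simply drag the score towards zero without appearing in either camp.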

Two weeks ago, at a Marketing Science Institute ceremony in Austin, Texas, Tim Keiningham, head of consulting at Ipsos Loyalty, received an award for an article written with academic colleagues demolishing Reichheld’s arguments. Here’s the gist of that article (and of others in a similar vein).

Reichheld produced charts showing a correlation between NPS and company growth. Yet if you substitute good old customer satisfaction numbers (as generated by the American Customer Satisfaction Index) for this data, you get equally good results: satisfaction is as good a predictor as NPS.

If you test other indicators such as share of wallet or customer retention for predictive value, what you discover is that each metric is really good at predicting its own future, but not much else. Past retention is a really good predictor of future retention, and past willingness to recommend is an excellent predictor of future willingness to recommend.

In other words, the “one number you need to grow” claim is a load of baloney. Multivariate indicators always work better than any single number. That hasn’t stopped the NPS bandwagon. It has been adopted by companies such as GE as the holy grail of marketing metrics and a driver of corporate priorities and actions. Five years after the bandwagon started rolling, what has experience taught us?

Four lessons

First, there are some obvious traps to NPS, such as believing that customers who say “yes” to the “would you recommend?” question in a questionnaire have actually become active advocates of the company. One study in financial services and telecoms found that only one in three customers who said “yes” ever made a recommendation, and only 13% of these referrals were acted upon. So NPS tells you little, if anything, about actual customer behaviour.

Second, NPS simply repeats many of the debates triggered by satisfaction research, only in a new form. In satisfaction research, there was a massive hoo-ha about the difference between being merely “satisfied” (arguably a pretty passive state) as opposed to “very satisfied” or “delighted”. Big differences in behaviour between these two groups were often found.

NPS tackles this head on: willingness to recommend is a much more active intention. But at the same time it recreates the same issues elsewhere. As Justin Kirby and Alain Samson point out in a recent article for Admap, there may be a big difference between an NPS of 40 derived from 70 minus 30 (70% promoters and 30% detractors) and one derived from 40 minus zero (40% promoters and no detractors at all). Also, a zero score may mean you have a brand terrorist on your hands, so you should probably be far more worried by lots of zeros than by lots of sixes. Yet, under the methodology, both are classed as exactly the same: detractors.
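Kirby and Samson’s point is easy to demonstrate: two very different customer bases collapse to the identical headline score. A quick sketch, with illustrative percentages only:

```python
def nps_from_shares(promoter_pct, detractor_pct):
    """NPS computed from the promoter and detractor percentages alone -
    which is all the headline number ever retains."""
    return promoter_pct - detractor_pct

# Polarised base: 70% promoters, 30% detractors (potential brand terrorists)
polarised = nps_from_shares(70, 30)
# Mild base: 40% promoters and no detractors whatsoever
mild = nps_from_shares(40, 0)

print(polarised, mild)  # → 40 40 - the single number hides the difference
```

The subtraction throws away the distribution, which is precisely the information a marketer confronting detractors would want.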

Third, there are some real practical difficulties with the methodology, such as the way scores tend to jump about wildly month by month. This volatility seems to be driven as much by market conditions – e.g. competitor initiatives – as by customers’ actual experiences of dealing with the company, so you have to treat the actual numbers with a pinch of salt.

Fourth, in its early days, one of the big selling points for NPS was that, because it was “the one number you need to grow”, you could use it to do away with other metrics (such as satisfaction), thereby saving money as well as focusing on what’s really important. But market research agencies fearing a sudden catastrophic loss of business needn’t have worried. Very few companies have dared drop their other metrics. They have simply added NPS-style questions to their existing research batteries, with many reporting useful correlations between NPS and other scores, including satisfaction. In other words, Keiningham’s point about multivariate data has been vindicated.

But the real issue is what you do with the data. Satisfaction and NPS scores share a drawback: they tell you how well you have done after the event, but not why you did well or badly, or how to improve your performance. In fact, satisfaction research is the more useful of the two in this respect, because with satisfaction you can ask customers whether they were satisfied with different elements of the offer – the call centre, the sales assistant, the fulfilment process and so on. You can’t derive NPS scores for these individual elements. That’s why, as Kirby and Samson report, many companies find the open-ended, qualitative “why” questions they attach to NPS much more useful than NPS itself.

Here we get to the nub of it: it’s not so much what measures you use, but how well you use them. NPS seems to be a useful tool in the hands of wily marketing managers who have used it to grab chief executives’ attention, focus this attention on what matters to customers, and bang heads together across the company to do something about it.

At T-Mobile, for example, consumer business director Philip Barden created a cross-functional NPS forum to “drive actions on what’s really pissing customers off”.

“It has really sharpened our focus,” he says. “It’s made us dig into the reasons why people become detractors and made us get our house in order as well as we can”.

Spreading the idea of customer focus beyond the marketing department has been the biggest benefit of NPS, says Barden. Along with other initiatives such as reporting NPS figures at board level, “it has helped a culture change” with results coming through in actual performance, he claims.

If truth be told, however, customer satisfaction could – and did – have similar galvanising effects in its early days. So yes, the metrics we adopt matter, but beware that cliché about “what gets measured gets managed”. In the end, it’s not metrics that drive managers. It’s managers who drive metrics, and we forget this at our peril.

Alan Mitchell, www.alanmitchell.biz
