What do you really think?

What a customer thinks, and what a market researcher is told, may not be the same thing. Companies have to ask the right people the right questions at the right time to get the best from their research, says Alicia Clegg

Think back to your last visit to the supermarket. How long did you have to queue? Was it less than a minute, one to two minutes, or more than five minutes? And what about stock availability? Could you find everything you wanted and, if you couldn’t, were the alternatives acceptable? Now (and think hard): if you were given the option, would you prefer to find all your first choices and queue for five minutes, or not queue at all but have to settle for more substitutions?

Deciding how to allocate a limited budget to best effect is absolutely fundamental to retailers. So the idea of presenting customers who have just shopped in a store with a range of scenarios and asking them, in effect, to tell you which aspect of service to invest in and which to ease up on has obvious appeal. But do such techniques really tell companies anything about the way in which customers, rather than businesses, make choices in the real world?

The question of how best to research customer service has been debated for years. It is a subject that exercises Professor Byron Sharp, director of the University of South Australia’s Marketing Science Centre and global director of the Research & Development Initiative (RDI), an industry-funded venture run in concert with South Bank University. In a short paper circulated earlier this year, Sharp argues that companies make investment decisions based on research that is often meaningless. The first problem, he suggests, is that consumers rarely evaluate service experiences in the logical way market researchers would have us believe.

It’s the thought that counts

To begin with, there is the assumption, often implicit in research, that people carry attitudes around in their heads that determine their buying behaviour. “Most of the time, we don’t give much thought to the burger that we’ve eaten or even the flight that we’ve made,” says Sharp. But, he adds: “Our natural inclination is to be polite and co-operative. If a researcher asks us to give an opinion we will do our best to formulate one on the spot.”

Made to measure

It is not just consumers’ desire to oblige researchers that makes measuring service experience so hazardous. Other factors are at work. A classic example concerns tracking studies. If customer satisfaction is rising, most managers assume that this is down to their own efforts.

Yet according to Sharp, satisfaction measures “show little or no relationship to service delivery”. Instead, they reflect factors that more often than not fall wholly outside the control of the business. To take a couple of examples: when interest rates fall, borrowers feel generally more positive about their bank’s service, even though, as customers, they are being treated in exactly the same way as before. Similarly, when the press carries stories about executive fat cats, satisfaction ratings for indicators such as value for money take a turn for the worse.

The pitfalls highlighted by the RDI are issues that research agencies grapple with on a daily basis. Many have evolved techniques that compensate, at least in part, for the worst sources of bias. A case in point is the growing importance attached to looking at research in context.

To take a simple example: discovering that 80 per cent of customers are satisfied with staff friendliness, while only 70 per cent are satisfied with value for money, might lead a company to conclude that it is doing well on friendliness and not so well on perceived value.

No comparison, no results

However, the exact opposite might be the case, according to Research International group business practice director Roger Sant. He says: “To make sense of research findings, you need comparative data, not absolute scores.” In this instance, that would mean comparing the company’s performance to that of its competitors and knowing that the scores awarded for staff friendliness tend to be higher than those relating to value for money.
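Sant’s point can be sketched numerically. The snippet below uses invented scores and invented industry norms (all figures are hypothetical, not taken from any real survey) to show how an attribute with the higher absolute score can still be the weaker performer once it is indexed against competitors:

```python
# Hypothetical data: a company's own satisfaction scores and the
# (invented) industry-average scores for the same attributes.
own_scores = {"staff friendliness": 80, "value for money": 70}
industry_norm = {"staff friendliness": 88, "value for money": 65}

# Gap to norm: positive means ahead of the competition,
# negative means behind it, regardless of the absolute score.
gaps = {attr: own_scores[attr] - industry_norm[attr] for attr in own_scores}
print(gaps)
```

On these made-up numbers, the 80 per cent friendliness score turns out to be eight points behind the norm, while the 70 per cent value score is five points ahead, which is exactly the inversion Sant warns about.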

On a similar tack, Tim Wragg, global director of NOP World’s customer management centre of excellence, argues that any competent researcher should be able to identify and correct distorting factors, such as seasonality or interest rate movements. So if the remedies exist, why do the problems continue?

One explanation may simply be ignorance. “There is still a good deal of managerial naivety about research,” acknowledges Wragg. Another factor, undoubtedly, is cost and pressure of time. Tracking customer satisfaction on an absolute scale is quick and easy to do – even if the results provide a less than reliable guide for service improvement. Indexing performance against competitors is more complicated and presupposes that companies can find people with relevant experience of competitor brands and are prepared to buy into a database containing normative data for the whole of the industry.

Even then, as Maritz Research head of analysis Jeremy Griffith points out, the solution is not perfect. “When you make comparisons between surveys there’s a danger of comparing apples with oranges – there is no guarantee that the questions asked of competitors’ customers exactly match your own.”

You do the maths

Some of the toughest research conundrums are being investigated mathematically with the aid of statistical tools such as econometrics.

Such techniques come into their own when a researcher wants to test a hypothesis. For instance, a company might have a hunch that a dip in satisfaction is seasonal, or that consumers are playing down the importance of a perk or freebie, such as free champagne in the flight lounge or the chance to enter a prize draw. “Spotting patterns in data can open your eyes to the influences on satisfaction consumers won’t freely admit to,” says Ian Brace, director of research at TNS and a spokesman for the Market Research Society.
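The seasonality hunch mentioned above is the sort of hypothesis such tools can test. As a minimal sketch (the data, the size of the winter dip and the noise level are all invented for illustration), an ordinary least-squares regression with a seasonal dummy can estimate whether winter months really do sit below the baseline:

```python
import numpy as np

# Hypothetical monthly satisfaction scores (0-100) over three years,
# built with an invented five-point dip in the winter months plus noise.
rng = np.random.default_rng(0)
months = np.arange(36) % 12
is_winter = np.isin(months, [0, 1, 11])  # Dec, Jan, Feb
scores = 75 - 5 * is_winter + rng.normal(0, 1, 36)

# Design matrix: an intercept column and a winter dummy column.
X = np.column_stack([np.ones(36), is_winter])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

baseline, winter_effect = coef
print(round(baseline, 1), round(winter_effect, 1))
```

A clearly negative winter coefficient would support the seasonal explanation; a coefficient near zero would suggest the dip lies elsewhere. A real analysis would also report standard errors and confidence intervals, which this sketch omits.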

Another way to probe the subtleties of consumer satisfaction is to undertake qualitative studies. The risk here, however, is of spending more time listening to people talk about brands and services that, in the normal run of things, they would consume without thinking. So what is the way forward?

Liaising with the enemy

One approach is to stop treating all respondents as equal and to focus on those who have most to say about your service. At one end of the spectrum, this means courting people companies have failed to impress – complaint-makers or those who have recently switched to a competitor. Another potentially fruitful line of attack, says Simon Roberts, founder of research-based strategic consultancy Ideas Bazaar, is to spend time with front-line staff. “There’s huge value in finding out about the processes that impede employees in their everyday work, because those are often the things that frustrate customers the most,” he says.

At the other end of the spectrum, researchers are devoting extra attention to clients’ most committed customers – so-called evangelists – who buy the brand repeatedly and recommend it to their friends. The hope here is that by talking to advocates, companies will gain a better understanding of the emotional triggers that create brand preference.

“Historically, researchers have focused on the rational bread-and-butter aspects of service, such as whether a repair was done properly,” says Larry Crosby, chairman of customer loyalty research and consulting company Symmetrics. He adds: “What we under-estimated was the extent to which brand preference and loyalty are influenced by emotional responses.”

To illustrate his point, Crosby describes how his own attitude towards US phone giant AT&T was transformed by a directory assistance operator, who provided him with the number he wanted for a restaurant in an unfamiliar neighbourhood and then talked him through how to get there.

The silent majority

But what about the people who consume brands casually, who are neither committed nor disaffected, but who, when all is said and done, account for the vast majority of the consuming public? Even if, as the RDI’s work implies, mainstream consumers pay less attention to service experiences than was once assumed, it would seem risky to discount their views. Whenever we purchase something from a store, book a holiday or buy a coffee, we come away with a general impression of whether the service was better or worse than we expected. And – if we are caught quickly enough – we can probably say why.

One way to capture fleeting thoughts before they fade from our memories is to conduct interviews close to the point of purchase. Another line of attack is to encourage consumers to give feedback on service in whatever way suits them. Grass Roots, a company that specialises in “performance improvement” rather than classic market research, favours this approach. Executive board director Nigel Cover says: “It’s about opening a channel of communication. If you call a customer while they are watching EastEnders, offer to contact them later. Alternatively, give them the option of calling you and leaving their comments on an automated system.”

A question of trust

Marketers are increasingly aware of the limitations of market research, particularly the dangers of relying upon a single source. But whatever the perils of trusting naively in consumer opinion, there exists one greater danger – and that is not trusting consumers’ opinions at all.