In search of the killer question

How many questions should an effective questionnaire have? Just the one, which pinpoints the most necessary bit of information, or a range to give the best perspective? Alicia Clegg reports

Roger Ivey, director of the Customer Partnership, prides himself on being a bit of a hatchet man – at least when it comes to market research. Dismissive of the “if in doubt keep it in” school of thought, he claims that most market research surveys can be whittled down to just a couple of “killer questions” that tell the client business everything it needs to know. As an example, he cites a project that he worked on with a supermarket that wanted to gauge how much holders of its loyalty card were potentially worth to the store. “There were actually only two questions that you needed to ask customers: how many children they had at home and how far they lived from the store.”

Ivey isn’t the only marketer in favour of keeping things simple. Still more draconian in their methods are disciples of Bain & Company emeritus director Fred Reichheld, who believe that there is, in fact, just one question that companies need to ask their customers: “How likely is it that you would recommend us to a friend or colleague?”

The need for questions
On the face of it, market research agencies should be preparing for a period of belt-tightening, if the enthusiasm for lean surveys takes root. But killer questions aren’t quite the research terminators that they are made out to be. Take Reichheld’s killer question first. Reichheld claims, on the basis of a relationship that he has proved statistically, that a company’s growth prospects are predicted by its “net promoter score”. The latter is a single figure, worked out by subtracting the percentage of customers who are unhappy with the business’s performance (the detractors) from the percentage who would recommend it (the promoters). Good though the metric may be as an overall indicator of corporate health, by itself it says nothing about why customers are unhappy, still less about what the company could do to make them happier.
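
For readers who want to see the arithmetic, a minimal sketch follows. The 0–10 rating scale and the promoter/detractor cut-offs are the standard NPS convention rather than details given in the article, so treat the figures as illustrative only.

```python
def net_promoter_score(ratings):
    """Net promoter score from 0-10 answers to "would you recommend us?".

    Conventionally, 9-10 counts as a promoter, 0-6 as a detractor,
    and 7-8 as passive (left out of the calculation).
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 detractors and 2 passives out of 10 respondents -> score of 20
print(net_promoter_score([10, 9, 9, 8, 7, 6, 4, 2, 10, 9]))
```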

Ivey’s slimmed-down questionnaire turns out, on closer inspection, to be even less of a complete cure-all for businesses sated with overstuffed surveys. To begin with, what counts as a killer question in one market sector, say supermarkets, is unlikely to be of much use in another sector such as financial services. Secondly, killer questions don’t magically present themselves to researchers. They are arrived at through an arduous process of preliminary research. In the normal course of things, this involves asking a test sample of customers lots of questions, then analysing the results statistically to sift the wheat from the chaff.
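
One plausible way to run that sifting stage is sketched below, purely as an illustration: the questions, the figures and the choice of a simple correlation screen are assumptions, not Ivey’s actual method. The idea is to put every candidate question to a small pilot sample, then rank the questions by how strongly their answers track the outcome the client cares about.

```python
from statistics import correlation  # needs Python 3.10+

# Hypothetical pilot data: answers to candidate questions from a small test
# sample, alongside the outcome the client cares about (annual spend, say).
pilot_answers = {
    "children_at_home": [0, 2, 1, 3, 0, 2, 1, 0],
    "distance_to_store_km": [1.0, 0.5, 3.0, 0.8, 5.0, 1.2, 2.5, 4.0],
    "favourite_colour_code": [3, 3, 1, 2, 2, 1, 3, 2],  # a weakly related control
}
annual_spend = [1800, 5200, 2600, 6100, 900, 4800, 2400, 1100]

# Rank candidate questions by how strongly their answers track the outcome;
# the top few survive as the "killer questions" put to the full customer base.
ranked = sorted(
    ((abs(correlation(answers, annual_spend)), question)
     for question, answers in pilot_answers.items()),
    reverse=True,
)
for strength, question in ranked:
    print(f"{question}: |r| = {strength:.2f}")
```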

The cheapest questions
Doing the heavy lifting upfront, so that the final phase of market research can be made as painless as possible, has a lot to recommend it. Firstly, asking a small number of people a lot of questions and a larger number just a few questions is likely to be considerably cheaper than asking everyone everything. Another factor to be weighed in the balance, says Juliet Strachan, a senior partner at HPI Research, is the danger of trying the public’s patience too far by exposing people to surveys that are ill-focused or unreasonably time-consuming. “One respondent turned off by an over-long, boring or repetitive survey is not going to respond to the next one.”
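
To put rough numbers on that first point, the sketch below compares the two approaches. The sample sizes, question counts and unit cost are invented for illustration, not figures from the article.

```python
# Assume, purely for illustration, that each question put to each respondent
# costs about the same to field.
cost_per_question_per_person = 0.50  # hypothetical unit cost, in pounds

everyone_everything = 10_000 * 50 * cost_per_question_per_person    # all 50 questions for all
two_stage = (500 * 50 + 10_000 * 5) * cost_per_question_per_person  # pilot, then 5 questions

print(f"Ask all 10,000 customers all 50 questions: £{everyone_everything:,.0f}")
print(f"Pilot of 500, then 5 killer questions for 10,000: £{two_stage:,.0f}")
```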

The starting point for developing a survey that is only as long as it needs to be lies in understanding what the client wants to learn and how what they learn will be used by the business. It is for this reason that good researchers devote almost as much time to talking to their clients as they do to talking to consumers. But creating a survey that works isn’t simply a matter of knowing the questions that the client wants answered and then asking them.

Nick Coates, a senior consultant at FreshMinds, makes the point that the terminology of surveys often reflects how businesses conceptualise markets rather than how customers think about things. When this happens, it is a sign that the agency has failed to pull off one of the key challenges of research: translating the client’s mental models into ideas with which customers identify, and then turning what customers say into analysis that the client can put to practical use. “It’s almost a linguistic issue. The killer question is one that can be re-integrated into the framework within which the business thinks.”

Added Value quantitative project director Caroline Whelan shares Coates’ view of the researcher as a “translator” who inhabits two worlds. As an example, she cites some work her agency recently undertook for a fragrance manufacturer. “When fragrance businesses think about their brands they concentrate on the brand’s equity. Is it wacky, sophisticated or brash? When consumers think about fragrances they don’t think about brands in isolation, they think about how they want to feel when they are wearing fragrances on particular occasions.” By making the mental leap from thinking about the brand’s self-image to how wearers wanted to feel in different contexts – such as a night out with a partner or an important business meeting – Added Value was able to formulate a set of questions that supported its client’s product development strategy. “We were able to say: ‘this is a feeling that is wanted a lot, on such and such an occasion, so it’s potentially a big opportunity for a new fragrance; while this is more of a niche opportunity’.”

The point of research
One issue with which many researchers struggle is the extent to which quantitative research is about finding things out, as opposed to attaching numbers to phenomena that have already been demonstrated. For Whelan: “Quantitative is always about having a hypothesis and then measuring it.”

The downside of being too prescriptive is that a hypothesis could turn out to be wrong, in which case a lot of time, effort and money will have been spent asking questions that move the client no further forward. Thus for Strachan: “There needs to be some space for manoeuvre that allows the researcher to explore beyond the brief’s strict objectives.” But what does that mean in terms of tangible research practices?

First, says Strachan – drawing upon a recent case of a services firm that had misinterpreted a particular pattern of behaviour recorded on its database – it is important to allow research participants to “self-classify,” that is to describe their actions in their own words, rather than forcing them to fall into categories defined in the research brief. “What the business thinks they are observing is not necessarily what the consumer is conscious of doing.”

Including questions that solicit comment, as well as closed questions, can also be useful, says Kadence UK research head Kieron Mathews, even when the free-form responses are not analysed statistically. “Open questions allow the researcher to understand the themes and sentiments behind the data.” Last, but not least, recommends Strachan, allow time for exploring the unexpected. “Being able to target and cherry-pick people who gave unexpected responses is extremely valuable. If you are looking for a killer question, it’s asking people whether they are happy to be re-contacted.”

The push to reveal the story behind the raw numbers of quantitative surveys has its roots in the same client-driven trend that has spurred qualitative researchers to range widely in their quest to fathom consumer psychology. So might a more experimental approach to research methodologies enable quantitative researchers, in their turn, to deliver a bigger bang for their clients’ buck?

An emotional platform
One practitioner who clearly sees scope for innovation in quantitative methods is Ipsos MORI managing director Gill Aitchison. A spokeswoman for the Market Research Society, Aitchison makes the point that while direct questioning is good for topics that focus on functional performance, it does less well for subjects that invite an emotional response. “Asking a direct question tends to force a rational response. But, the reality is that much of life is lived on an emotional platform.”

To enable quantitative researchers to inquire into the subjective side of human experience, agencies that aspire to be more than data factories are borrowing from the armoury of qualitative research. One example of this is the integration of oblique and projective styles of questioning into quantitative work. “Rather than asking people what their perceptions are, it can be useful to ask them what their friends would say,” says Coates. “It’s a way of reducing the psychological pressure on people.” Another strategy, where budgets permit, is to employ quantitative and qualitative interviews in combination with each other.

A third, more radical, option is to dispense with language altogether and draw people out by employing other, more instinctual, forms of communication. “Rather than asking questions, it can be helpful to get people to express their feelings about a brand or a piece of advertising by choosing pictures that depict emotions,” says Aitchison. “Expressing yourself through a series of words requires you to go through a structured thought process. When you respond to a picture, a colour, or a shape, the mechanism is more subconscious.”

For most of human history, people have been forced, for want of alternatives, to make decisions guided only by instinct. In the 21st century, the tables have turned to such an extent that a superabundance of data threatens to drown out people’s capacity for intuitive thinking. The need now isn’t for one-off “killer questions”, but for analytical methods that allow imagination to flourish.