I’m on a mission to rid our industry of bad research. I’m fed up with it. It leads to poor decision making, and increases the disconnect between marketers and consumers. In short, it’s damaging to brands and the industry we represent.
Over the past five to 10 years, conducting research has never been so affordable and accessible. On the surface, you might consider this a good thing. The increased number of insights and easy access to the consumer should be a marketer's dream. After all, we like to position ourselves as experts in understanding people. Also, let's not forget that we're all under enormous pressure to use data and insight to underpin strategic decision making – having ready-made insights at our fingertips is incredibly useful.
Increased accessibility means we've also reached a point where almost anyone can run research. Thought leadership studies promising 'ground-breaking insights' are being published to the industry so regularly, it feels utterly relentless. The problem is that quantity is not a sign of quality. Most of these studies are conducted without the necessary research expertise, are agenda-driven, and often seek to reinforce the current orthodoxy of thought within the industry at any given time.
The worst part? Marketers seem to lap it up. I've lost count of the number of times I've seen marketers (across all levels) quoting from research I know to be nonsense. I get that we're all busy. Most marketers don't have the time or research expertise to interrogate the intricacies of what's being put in front of them. Equally, we are often slaves to our own biases. If research supports our point of view, we have little appetite to challenge it. If it doesn't, we can easily dismiss it without proper interrogation.
But this doesn’t let us off the hook. Marketers can’t absolve themselves of the role they’re playing in the normalisation of poor research. We should all be very concerned by the lack of critical thinking. If we continue to take everything at face value, companies will continue to churn out nonsense and the quality of decision making will spiral downwards.
It's not all doom and gloom though. If we acknowledge that utilising good research leads to better outcomes, it's in our interest to make sure we can separate the good from the bad. While the following isn't intended to be an in-depth course on research methodology (let's be honest, you'd probably stop reading if it was), I'd like to propose a simple guide to identifying poor quality research.
1. Ask yourself who has published the study and why
Almost all industry research comes from businesses pushing an agenda – this can be consultancies, agencies, media owners and brands themselves. Knowing who is behind a study should always be the starting point. Ask yourself why, every year, the PR giant Edelman publishes a trust barometer decrying the decline in trust in big business, or why Snapchat tells us that gen Z are unlike any generation before them. This is the foundation on which you can start to interrogate research further.
While this is important, it doesn't always mean that the research isn't robust and the insights aren't valid. Thinkbox, for example, has an obvious agenda, but its research consistently stands up to scrutiny. The message here is that you need to be conscious of the agenda, but don't dismiss anything until you've dug into the details.
2. Determine the extent of social desirability
When I give talks, I often ask how many people in the audience agree with the statement ‘I am more likely to buy brands who do good for society’. Everyone raises their hand. I then ask if anyone has bought a product from Amazon in the past week – everyone raises their hand again.
This is an example of social desirability bias. Even when someone participates in an online survey, they will have a tendency to answer questions in a way that will be viewed favourably by others. This often means over-reporting what would be deemed good behaviour.
Given that much of marketing is focused on influencing the choices people make, it’s unsurprising that most of the research we see is focused specifically on consumer behaviour. In this context we need to be extremely wary of social desirability, particularly when it comes to motivations for buying products. I would go so far as to say that social desirability is underpinning the social purpose marketing agenda – but, in the interests of time, I’ll save that for another column.
While we can't do much about the agenda of the companies publishing the research, we can look at the results through the lens of social desirability. Ask yourself if the questions are framed to elicit a socially desirable response. If someone is presenting the research to you, ask how they've mitigated social desirability bias (it can be done).
3. Back of the ‘net’
Published research usually has many functions, but often its first requirement is to get industry media coverage. In order to achieve this, it usually has to hit a number of criteria:
- It has a large sample size
- It taps into the prevailing orthodoxy of the industry (let’s keep the theme going and use social-purpose marketing strategies as an example)
- It has some juicy stats
As such, studies will be designed to ensure that some of the findings have high percentages; for example, 90% of gen Z agree that 'authenticity' is the number-one trait they want from a brand. This also taps into our obsession with making generalisations about entire cohorts of the population, but I'll save that for another day too.
To achieve said high percentages, researchers create statement questions (all the better if they're socially desirable) where respondents are asked, on a scale, the extent to which they agree or disagree with a statement. The agreement percentages are derived by combining those who select the 'slightly agree' and 'strongly agree' options. Sometimes it's a 10-point scale and scores of 8-10 are netted.
In all my years of research, the 'strongly agree' group (ie where there is genuine strength of feeling) is almost always far outweighed by those who selected 'slightly agree'. Essentially, the real finding is that a small proportion of people are in strong agreement, while the majority are mostly ambivalent. People publishing the research will know this, but they will try their best to pull the wool over your eyes. I've seen findings where 6-10 on a 10-point scale has been netted as agreement with a statement – if you ever see this, throw it in the bin immediately.
My advice is always to ask for the percentage breakdown of 'slightly agree' and 'strongly agree' (or the full 1-10 breakdown on a 10-point scale), so you can get a real sense of the strength of feeling about a particular statement. It will almost certainly give you a different perspective on the findings, and will help identify those who are pushing a particular agenda.
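To make the netting trick concrete, here is a minimal sketch using entirely made-up, illustrative numbers (not from any real study) showing how a headline 'net agreement' figure can mask how few respondents actually feel strongly:

```python
# Hypothetical 5-point Likert results (percentages of respondents).
# These numbers are invented purely for illustration.
responses = {
    "strongly disagree": 4,
    "slightly disagree": 10,
    "neither agree nor disagree": 22,
    "slightly agree": 49,
    "strongly agree": 15,
}

# The "netted" headline combines slight and strong agreement.
net_agree = responses["slightly agree"] + responses["strongly agree"]

print(f"Headline net agreement: {net_agree}%")                       # 64%
print(f"Strong agreement only:  {responses['strongly agree']}%")     # 15%
```

The press release would report "64% agree", yet only 15% strongly agree and nearly half sit in the lukewarm 'slightly agree' camp – exactly why asking for the breakdown matters.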
So there you have it, a simple guide to help you identify bad research. Tune in next week for complex sampling techniques (only joking).
Andrew Tenzer is director of market insight and brand strategy at Reach. He is on Twitter at @thetenzer.