
There were gasps of surprise around the country when the exit polls at the general election predicted an outright Conservative majority. Few were more surprised, however, than the companies that had been doing pre-election polling.

Almost all had been predicting a hung Parliament, with Labour and the Conservatives neck and neck on 34% of the vote. In the end, however, the Conservatives won, taking 37% of the vote to Labour’s 31%.

The errors prompted a review of the polling industry by the British Polling Council, whose preliminary findings are now out. The review found that “unrepresentative” samples were to blame.


In particular, the polls did not do a good enough job of asking those aged over 70 how they intended to vote, and they missed voters who were too busy to respond to polls but who turned out to be more likely to vote Conservative. Too much stock was also placed in under-30s who said they planned to vote Labour but in reality did not turn up at the polling stations.

As part of his recommendations, Patrick Sturgis, professor of research methodology at the University of Southampton and head of the review, says: “Companies are going to have to be more imaginative and proactive in making contact with, and giving additional weight to, those respondents that they failed to reach in adequate numbers in 2015.”

What skewed the results?

A number of theories attempted to explain why the pollsters got it so wrong. These included a late swing to the Conservatives, Labour abstentions and so-called ‘shy Tories’ who didn’t reveal their true voting intentions.

However, the report found that none of these was the cause. As part of the review, 4,328 people responded to a ‘British Social Attitudes’ (BSA) survey. This gave the Conservatives a lead of 6.1 points over Labour, very close to the actual election result of a 6.6-point lead.

This suggests the problem was that the polls interviewed too many Labour supporters and not enough Conservatives, and didn’t account for this in their weighting.
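
The mechanics are easy to see with a toy example. The sketch below (Python, with entirely invented figures) shows how re-weighting a sample so that its age mix matches the population changes the headline vote shares. The caveat, which is the review’s central finding, is that weighting only helps if the people a poll does reach in each group vote like the people it misses.

```python
# A minimal sketch of demographic weighting with invented numbers (not the
# pollsters' real data). Each age group has a share of the population, a share
# of the poll sample, and a party split. Re-weighting the sample to population
# shares shifts the headline figures -- but only if the people the poll did
# reach in each group vote like the ones it missed.

groups = {
    #            population, sample, Con share, Lab share  (all invented)
    "under 30": (0.20,       0.30,   0.28,      0.45),
    "30 to 69": (0.60,       0.60,   0.38,      0.32),
    "70 plus":  (0.20,       0.10,   0.47,      0.24),
}

def estimate(use_population_weights: bool) -> tuple[float, float]:
    """Return (Conservative %, Labour %) from the grouped sample."""
    con = lab = 0.0
    for pop_share, sample_share, con_share, lab_share in groups.values():
        weight = pop_share if use_population_weights else sample_share
        con += weight * con_share
        lab += weight * lab_share
    return round(con * 100, 1), round(lab * 100, 1)

print("Raw sample:      Con %.1f%%, Lab %.1f%%" % estimate(False))
print("Weighted sample: Con %.1f%%, Lab %.1f%%" % estimate(True))
```

In this invented example the raw sample shows a dead heat, while the age-weighted figures give the Conservatives a clear lead, purely because the over-70s are under-represented in the sample.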

“A key lesson of the difficulties faced by the polls in the 2015 general election is that surveys not only need to ask the right questions but also the right people. The polls evidently came up short in that respect in 2015.”

Professor John Curtice, report author

Why was this the case? The BPC says it was mainly because the BSA survey was conducted very differently from the pre-election polls: it used random sampling and ensured that as many people as possible on the resulting list were then questioned, in a process that took four months.

The election pollsters, however, tended to have only short fieldwork windows of a few days, meaning that the people who were most easily contacted, typically online or by phone, were the most likely to respond.
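
To make that contrast concrete, here is a small simulation (Python, with invented propensities rather than real polling data): hard-to-reach people are assumed to lean a little more Conservative, a quick poll mostly picks up whoever answers within a short window, and a random-probability sample with persistent follow-up gets much closer to the truth.

```python
# An illustrative simulation of the sampling difference described above. The
# numbers are invented: in this toy population, people who are hard to reach
# within a few-day fieldwork window are somewhat more likely to vote
# Conservative. A quick poll that takes whoever answers first over-represents
# the easy-to-reach; a random sample with persistent follow-up does not.
import random

random.seed(1)

def make_person():
    hard_to_reach = random.random() < 0.4            # 40% rarely answer quick polls
    p_con = 0.42 if hard_to_reach else 0.34          # invented vote propensities
    return hard_to_reach, ("Con" if random.random() < p_con else "Other")

population = [make_person() for _ in range(200_000)]

def con_share(sample):
    return sum(vote == "Con" for _, vote in sample) / len(sample)

# Quick poll: hard-to-reach people respond far less often in a short window.
quick_poll = [p for p in random.sample(population, 20_000)
              if random.random() < (0.1 if p[0] else 0.9)]

# Random-probability survey: a smaller random sample, but (almost) everyone
# selected is eventually interviewed after repeated follow-up contacts.
random_survey = random.sample(population, 4_000)

print(f"True Conservative share: {con_share(population):.1%}")
print(f"Quick-poll estimate:     {con_share(quick_poll):.1%}")
print(f"Random-sample estimate:  {con_share(random_survey):.1%}")
```

Under these assumptions the quick poll understates the Conservative share by a few points, while the smaller random sample lands close to the true figure despite costing far more effort per respondent.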

What brands can learn

Curtice says the review shows that: “Random sampling, time-consuming and expensive though it may be, is more likely to produce a sample of people who are representative of Britain as a whole.

“Using that approach is crucial for any survey that aims to provide an accurate picture of what the public thinks about the key social and political issues facing Britain and thus ensure we have a proper understanding of the climate of public opinion.”

Polling voters is, however, a particular use case. As Martin Boon, director at market research agency ICM Unlimited, points out, the fact that the polls failed to find enough people aged over 75 matters when trying to predict an election outcome, but matters less if it skews a brand awareness or NPS score.

Yet any poll that doesn’t use random sampling will be biased towards people who have the time and interest to answer the survey.

As Michael Simmonds, CEO at research and strategy consultancy Populus puts it: “Low-cost, quick-fire surveys such as those that were used to determine voting intention should not be relied on to inform big business decisions and shape strategic change.”

The cost of research

What marketers must do is weigh up the costs. Is the market research they wish to undertake worth the time and money spent on random sampling, or will a quick show of hands suffice?

Simmonds adds: “They have a very useful function in gauging immediate public reactions and attitudes towards news, events and campaigns but the findings should always be used within the context of a rich and confident understanding of a market, which will have been reached through a much broader, more long-term and altogether more sophisticated programme of research.”

Boon also recommends brands make sure they understand not only what people are saying but what they really think about the subject, by using ‘implicit response testing’. He uses the example of people’s attitudes to Syrian refugees and whether they would be willing to give up a room in their own home.

When questioned, fewer than half of those who said they would offer a room actually believed their own response.

“There’s a key point here in not only getting at what people think but how they feel and then what they go on to do. That should and must be central to how brands measure attitudes in the future,” he said.
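
Implicit response testing is typically built around timing rather than just recording answers: a quick, unhesitating ‘yes’ is read as genuinely held, while a slow one suggests the respondent may not believe their own answer. The sketch below is a hypothetical illustration of that idea, with an invented threshold and invented data; it is not necessarily ICM’s actual implementation.

```python
# A hypothetical sketch of implicit response testing. One common approach (this
# is an illustration, not ICM's actual methodology) records how quickly each
# respondent endorses a statement: fast agreement is read as genuinely believed,
# slow agreement as weakly held or socially desirable. The data are invented.
from dataclasses import dataclass

@dataclass
class Response:
    respondent: str
    said_yes: bool          # explicit answer: "I would offer a refugee a room"
    reaction_ms: int        # time taken to give that answer

BELIEVED_THRESHOLD_MS = 1200   # assumed cut-off for a "confident" answer

def believed(r: Response) -> bool:
    """Treat only quick 'yes' answers as answers the respondent really believes."""
    return r.said_yes and r.reaction_ms <= BELIEVED_THRESHOLD_MS

responses = [
    Response("A", True, 800),
    Response("B", True, 2400),
    Response("C", True, 650),
    Response("D", True, 3100),
    Response("E", False, 900),
]

yes_sayers = [r for r in responses if r.said_yes]
confident = [r for r in yes_sayers if believed(r)]
print(f"Said yes: {len(yes_sayers)}, appeared to believe it: {len(confident)}")
```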