There has been a good deal of wailing and gnashing of teeth following the supposed failure of the opinion polls to predict the result of the US Presidential election. It follows similar complaints after the Scottish independence referendum, the British general election and most other polls in recent years.
The Washington Post is typical of many observers in talking about “the polling disaster of 2016”.
“The disaster of the 2016 election forecasts is not dissimilar (to a plane crash) with a series of mistakes building upon one another to lead prognosticators astray. Pollsters now are sifting through the wreckage to find the black boxes and assess what went wrong in order to prevent it from happening again.”
It has even popped into popular parlance. You now hear ordinary people, on the proverbial Clapham omnibus, musing that the polls got it wrong – again. There’s a certain irony about observers’ rush to blame the polls for this state of affairs, in an election, itself so characterised by ‘blame culture’.
In any case, their angst is wholly misplaced.
I’ll make just two points about polling. First, for a host of reasons, polls cannot be wholly accurate predictors; second, to expect them to be is to misunderstand the nature of quantitative research.
Opinion polling can never be a truly reliable predictor of behaviour. First, there are a bunch of reasons specific to the particular research design adopted. For example, after the last UK general election, the industry’s post mortem concluded that the Labour vote had been overstated and the Tory vote understated because, when you approach people to respond to a survey, Labour voters (ordinary working people and poorer types) are disproportionately more likely to be available to answer your questions than Tories (busy senior managers and professionals). This is just one of many ways in which the best-laid plans of market research designers fall down on the intricate trivialities of daily life.
If we weren’t already sceptical about the accuracy of the polls, then we should be alarmed by the fact that the “poll of polls” often shows results which differ by large margins even when the surveys were only a few days apart. If nothing else, this reminds us that the findings are ‘directional’ and can’t be considered objective.
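To put those swings in perspective, it helps to know how much polls *should* differ from one another through sampling error alone. The sketch below uses the standard textbook formula for the margin of error of a proportion; it assumes simple random sampling, which real polls only approximate, and the sample size of 1,000 is an illustrative figure, not a quote from any actual poll.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a polled proportion.

    p: reported share (e.g. 0.48 for 48%)
    n: sample size
    z: z-score for the confidence level (1.96 corresponds to ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents reporting 48% support:
moe = margin_of_error(0.48, 1000)
print(f"95% margin of error: ±{moe * 100:.1f} percentage points")
```

For a poll of this size the result is roughly ±3 percentage points, so two honest polls of the same electorate can legitimately sit several points apart. When the “poll of polls” moves by much more than that in a few days, something other than sampling noise is at work.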
There’s a more fundamental issue here too. Contrary to appearances, quantitative research isn’t really about asking people questions. It’s about designing experiments. And this particular design falls down because of what particle physicists call the observer effect: the act of observing influences the phenomenon being observed. For example, for us to “see” an electron, a photon must first interact with it, and this interaction will change the path of that electron. The same is true of opinion polls in general elections. Because poll results are published, seeing your party down in the polls may encourage you to turn out and vote. Seeing your party up may encourage you not to bother. Either way, the poll is likely to be undermined by its own findings.
This isn’t intended to be a damning indictment of opinion polls. Quite the reverse. They are mostly a good thing – if I were plotting a political campaign I would want to know if my messages were getting through.
But if we expect them to predict the result, we’re misguided. The nature of voting is simply too fluid and subject to too many influences. And in any case, expecting quantitative research to predict future events is never likely to be that accurate in this or any other area of study. When we use quantitative research to ask people how likely they are to buy a new product, we don’t expect that the numbers will literally represent the number of buyers. We take it as a guide. We compare it with other similar studies in the past and some other benchmarks to help guide our judgements. We don’t simply take the numbers and treat them as some kind of truth.
Nobody should have these expectations of ‘truth’. And the practitioners themselves have scored a massive own goal by allowing these unrealistic expectations to spread – well, it must have seemed good for business, I suppose. Until now. Now the pollsters have become one more group to blame for everything going wrong. Talk about shooting the messenger.