
LSE BPP

May 23rd, 2019

Why reading polls is actually a lot more complicated than it looks


When the results of the European Parliament elections are revealed, they will likely be met with the usual assortment of self-congratulation and outcry. Drawing on the work of the LSE Electoral Psychology Observatory, Michael Bruter and Sarah Harrison explain why polling is a lot more complicated than people may think.

Reading polls is a lot more complicated than many assume, and we often blame polls for our own misunderstanding of what they can and cannot tell us and of the nature of citizens’ psychology. As British voters cast their ballots in the European Parliament elections, here are seven points that might be of interest to those who care about polls and elections.

Over 90% of our political thinking and behaviour is actually subconscious

You can ask someone what they think or what they will do, but the truth is that they do not know it themselves. So, whilst a person’s answer to the question ‘who will you vote for on Thursday?’ is useful and consequential, it is not so in a ‘what they say is what they will do’ sort of way. This impact of the subconscious has been well studied, and scholars are aware of it, but sometimes we do not fully realise its impact on the relationship between what we ask and what people can answer.

Moreover, the consequences of the subconscious nature of political behaviour can be harder to evaluate in the age of de-aligned politics. Whilst everyone realises that party identification has declined, the prevalence of ‘old’ models of identification means that we implicitly assume that such identification is the sole source of consistency in electoral behaviour, and that, therefore, electoral unfaithfulness or populism is incoherent or anomalous.

The truth is that there can be many other sources of consistency in electoral behaviour which are not based on constant party choices. One may choose to vote for Labour when the Tories are in power for exactly the same reason they chose to vote Tory when Labour was in power. This can be perfectly coherent and predictable, just not within a simplistic partisan framework; it instead reflects how different voters perceive the function of elections and their own role as voters – what we call their electoral identity.

(Most) people vote on Election Day for a reason

Our research suggests that between 20% and 30% of voters typically make up or change their minds within a week of an election, about half of them (10–15%) on Election Day itself. In low-salience elections – such as European Parliament elections – this proportion can be even higher.

We find that the atmosphere of an election typically intensifies markedly in the final week, and also that voters are far more sociotropic when they vote at a polling station than when they answer survey questions or vote from home. Usually, many of those late deciders and switchers cancel each other out, but sometimes they do not (e.g. the second round of the 2017 French presidential election), and individual conversions can then add up to a change in the election result.

Pollsters make assumptions about what they are getting wrong

Your favourite pollster does not just ask people who they will vote for and print the result; poll results would look very different if they did. Instead, pollsters use past experience and make assumptions about what they must change to correct what they believe is the gap between measured responses and the actual picture.

For instance, if, for the past three years, a survey company has underestimated the country’s extreme-right vote, it will assume that its current measure similarly undercounts those voters, and it will “correct” the responses it gets by using weightings (i.e. multiplying the declared vote for the extreme right by a small, or sometimes not-so-small, factor) to try and reach a more realistic figure. Those assumptions and corrections may make the results more accurate… or less.
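To make the mechanics concrete, here is a minimal sketch of such a weighting step. The party names, shares, and correction factor are entirely invented for illustration and are not taken from any real pollster.

```python
# Minimal sketch of a pollster-style weighting correction.
# All figures below are hypothetical, chosen only to illustrate the mechanics.

raw_shares = {"Party A": 0.34, "Party B": 0.30, "Far-right list": 0.08, "Others": 0.28}

# Suppose the pollster believes it has undercounted the far-right vote in
# recent elections and decides to inflate that declared share by 25%.
correction_factors = {"Far-right list": 1.25}

def apply_weighting(shares, factors):
    """Multiply selected declared shares by correction factors, then renormalise."""
    adjusted = {party: share * factors.get(party, 1.0) for party, share in shares.items()}
    total = sum(adjusted.values())
    return {party: round(value / total, 3) for party, value in adjusted.items()}

print(apply_weighting(raw_shares, correction_factors))
# The far-right figure rises from 8% to roughly 9.8%; whether that is closer
# to reality depends entirely on whether the underlying assumption still holds.
```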

Pollsters make assumptions about who will vote

Pollsters also make assumptions about turnout. What is the point of taking into account a respondent’s answer on who they will vote for if they end up staying at home? When you expect a high turnout of, say, 80%, those assumptions are fairly robust, but the lower the expected turnout, the more fragile the guesswork becomes – and European Parliament elections, which tend to have a low turnout, fall squarely within that category.

Often, survey companies use questions such as ‘how sure are you to vote?’ to predict turnout, but our research suggests that those questions are not very effective. So many of the differences across polls are partly due to different pollsters making different assumptions about which categories of respondents will or won’t vote – something many got wrong about young people in 2017.

Furthermore, when analysing the EU membership referendum and the 2017 snap election, we pointed out that most polls and surveys do not use actual measures of turnout, because they count the proportion of voters out of total respondents, whilst turnout is the proportion of voters out of registered voters. However, we know that those aged 20–25 are far less likely to be registered than any other category. So, in a poll, when a 20-year-old tells you that they won’t vote, you count them as an abstentionist whilst they may simply be unregistered.

Because of what we explained above about weightings (surveys chronically have too many people claiming to vote), mistaking unregistered youngsters for abstentionists will then make you disproportionately overestimate the presence of abstentionists amongst young people.
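As a toy illustration of that denominator problem (all numbers below are invented), consider a sample of young respondents:

```python
# Hypothetical illustration of the turnout-denominator problem.
# All numbers are invented for the example.

respondents_20_25 = 100      # young respondents in the sample
say_will_not_vote = 40       # of whom 40 say they will not vote
unregistered = 15            # suppose 15 of those 40 are simply not registered

# Naive reading: apparent abstention among the young = 40 / 100 = 40%.
naive_abstention = say_will_not_vote / respondents_20_25

# Turnout, however, is defined over registered voters only.
registered = respondents_20_25 - unregistered     # 85
abstainers = say_will_not_vote - unregistered     # 25
corrected_abstention = abstainers / registered    # roughly 29%

print(f"naive: {naive_abstention:.0%}, corrected: {corrected_abstention:.0%}")
# Counting unregistered 20-year-olds as abstainers inflates the apparent
# abstention rate for that age group from about 29% to 40%.
```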

Institutional designs matter

Electorates are not monolithic, and national trends do not count as much as what happens within each constituency. For instance, Labour was more affected than the Conservatives by the rise of the UKIP vote in the 2015 General Election, notably in some Northern constituencies, and benefited more from its decline in 2017.

With the 2019 European Parliament elections, things are further complicated by the d’Hondt method, which implies different calculations for strategic voters than the plurality system used in General Elections. Under plurality, it is quite easy to be strategic: avoid small parties. Under the d’Hondt method, things are far more complex: often, an additional vote for an already strong list is effectively wasted, whilst supporting a smaller list can win it a seat. Thus, the ‘remain voter’ website designed by data scientists has come up with suggestions for remainers to vote for the Lib Dems in Scotland and Wales, the Greens in the North West and the Midlands, and for Change UK in London and the South East. Even then, those predictions still rely on the polls, so they may be exactly right, or not at all.
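For readers unfamiliar with it, the d’Hondt method allocates seats one at a time to the list with the highest quotient (its votes divided by one plus the seats it has already won). The sketch below, using invented regional vote totals, shows why a small shift of votes between lists can move a seat in ways that are hard to anticipate.

```python
# Minimal sketch of d'Hondt seat allocation. Vote totals are invented.

def dhondt(votes, seats):
    """Allocate seats one at a time to the list with the highest quotient
    votes / (seats already won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# A hypothetical region electing 8 MEPs.
example_votes = {"List A": 320_000, "List B": 280_000, "List C": 150_000, "List D": 60_000}
print(dhondt(example_votes, 8))
# -> {'List A': 4, 'List B': 3, 'List C': 1, 'List D': 0}
# The final seat goes to List A on a quotient of 80,000 against List C's 75,000;
# moving roughly ten thousand votes from A to C would hand that seat to C instead,
# which is why strategic voting under d'Hondt is far less intuitive than under plurality.
```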

Polls do not just measure voting intentions, they shape them

One of the important concepts in our research is ‘societal projection’: many voters think of the rest of the country and its behaviour as they cast their vote. Of course, opinion polls play a big part in shaping what we all believe others are doing. In other words, polls precede the vote and can thus affect it.

Thus, we believe that part of the explanation for the apparent ‘surprise’ of the 2015 General Election, in which the Conservatives won an outright majority after all the polls seemed to predict a hung Parliament, is that, precisely because of those expectations, many voters cast their ballot thinking that they were effectively shaping the type of coalition that could lead the country rather than giving a party a majority.

In an era of transparency, it is good that all citizens have equal access to polling information, rather than it being collected secretly by parties or governments. However, this creates an endogeneity problem: polls do not just measure something with no ensuing consequence; instead, they become one of the prime sources of information that voters use to make up their minds.

Should we just get rid of polls altogether?

Nobody would suggest that we should stop conducting chemistry experiments just because not everyone can immediately interpret their results or teach themselves on the internet how to be an effective chemist. Survey research is the same: a complex body of knowledge whose design, analysis, and interpretation require expertise and literacy. It also requires sufficient introspection to know the limits and conditions of the exercise.

At the Electoral Psychology Observatory, when we thought that Leave and Trump might well win at a time when most polls seemed to suggest Remain and Clinton victories, it was because we asked ourselves questions such as ‘do the people who should vote for Clinton say that they will?’ and ‘are claimed Leave and Remain supporters equally enthusiastic, or do they say that they will choose the lesser of two evils?’. The picture that emerged was very different from the misleading headline figures most media focused on.

Voters not doing what they said they intended to do does not make polls wrong. Instead, we are blaming polls for our own collective lack of understanding of how we should read them, and of the psychology of voters: human beings who react to all information, including polls, and who can and will change their minds until the last minute, especially when they feel the responsibility of their role as voters on their shoulders. Indeed, anyone who has ever been to a polling station knows that this is not a meaningless moment. A whole range of thoughts, memories, and emotions come to mind as we stand in the polling booth, which we could not anticipate until we are reminded of the role we inhabit as voters. That we should behave differently in those circumstances than when we answered a random interviewer about our voting intentions does not make the interviewer wrong; instead, it proves the whole logic of electoral democracy right.

________________

Note: you can follow the work of the Electoral Psychology Observatory here.

About the Authors

Michael Bruter is Professor of Political Science and European Politics in the Department of Government at the LSE, and Director of the Electoral Psychology Observatory.

Sarah Harrison is Assistant Professorial Research Fellow in Political Science in the Department of Government at the LSE, and Deputy Director of the Electoral Psychology Observatory.

All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science. Featured image credit: Pixabay (Public Domain).

