
For most commentators and pollsters, Donald Trump’s victory in the 2016 presidential election came as a sharp surprise. Charles Tien and Michael S. Lewis-Beck examine how political science modelers performed in their election predictions compared to poll aggregators and to the national polls. When looking at Hillary Clinton’s share of the two-party vote, they find that political science models were the most accurate in their forecasts. The national polls, by contrast, were largely outside of the margin of error in their predictions for both Clinton and Donald Trump’s share of the popular vote.

The 2016 US election forecasting field was divided mainly among political science modelers, pollsters, and poll aggregators. Pollsters and poll aggregators use national and state-level vote intention polls to make their forecasts, updating them continually until Election Day. Political science modelers apply theory and evidence from the voting and elections literature to issue their forecasts months before Election Day. How did each approach perform in 2016, and what does that tell us about the polling versus modeling methods?

First, we evaluate how the political science models fared in forecasting Hillary Clinton’s share of the two-party vote, now standing at 51 percent (as of December 2, 2016). Overall, they did quite well. Taking as a base the forecasts issued by modelers at the annual American Political Science Association meeting, we observe that seven of eleven were under one percentage point off (see Table 1, which draws on the forecasts made in the October issue of PS: Political Science and Politics). An additional three forecasts were less than two-and-a-half points away. Really, just one could be considered something of an outlier, with an error over three points. By way of contrast to all of these, the forecast from our Political Economy model appears to have called it exactly right, at 51 percent.

Each political science model can also be evaluated against its standard error of the estimate (SEE), which gives the expected, or average, error of the model. The smaller the SEE, the more accurate the model; if the 2016 error falls within the SEE, the model can be counted accurate for this election. For eight of the ten models that reported an SEE, the 2016 error was smaller than the model’s average error. For example, our Political Economy forecast model reported an SEE of 2.84, and its 2016 forecast had no error at all. Overall, the performance of the political science modelers’ forecasts is remarkable.
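This within-SEE check is simple arithmetic. As a rough illustration (our own sketch, not the modelers' code), using two sets of figures quoted from Table 1:

```python
# Sketch: is a model's 2016 forecast error within its reported standard
# error of the estimate (SEE)? Figures taken from Table 1; the comparison
# logic is our own illustration.

ACTUAL_CLINTON_SHARE = 51.0  # two-party vote, as of December 2, 2016

models = {
    # name: (forecast for Clinton, reported SEE)
    "Political Economy (Lewis-Beck & Tien)": (51.0, 2.84),
    "Time for a Change (Abramowitz)": (48.6, 1.74),
}

for name, (forecast, see) in models.items():
    error = abs(ACTUAL_CLINTON_SHARE - forecast)
    verdict = "within" if error <= see else "outside"
    print(f"{name}: error {error:.2f} pts, {verdict} its SEE of {see}")
```

By this standard the Political Economy model (error 0.00, SEE 2.84) passes, while the Time for a Change model (error 2.40, SEE 1.74) does not.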

The accuracy becomes more remarkable when one considers how far in advance of the election the forecasts were made. Looking again at Table 1, we see the median lead time was 78 days, well before the November 8 election date, when most poll aggregators were rolling out their final guesses.

Table 1 – Forecasts from Political Science Models

| Forecaster(s) | Model | Predicted two-party vote for Clinton | Certainty of popular vote plurality | Standard error of estimate | Days before election | Error from 51.0% of two-party vote |
| --- | --- | --- | --- | --- | --- | --- |
| Michael Lewis-Beck and Charles Tien | Political Economy | 51.00% | 83% | 2.84 | 74 | 0.0 |
| James Campbell | Convention Bump and Economy | 51.20% | 75% | 1.84 | 74 | -0.2 |
| James Campbell | Trial Heat and Economy | 50.70% | 69% | 1.67 | 60 | 0.3 |
| Brad Lockerbie | Economic Expectations and Political Punishment | 50.40% | 62% | 2.78 (MAE) | 133 | 0.6 |
| Christopher Wlezien and Robert Erikson | LEI and Polls, Pre-Conventions | 51.80% | 72% | 2.86 | 148 | -0.8 |
| Bruno Jerôme and Véronique Jerôme-Speziari | State-by-State Political Economy | 50.10% | 50% | 4.64 | 121 | 0.9 |
| Christopher Wlezien and Robert Erikson | LEI and Polls, Post-Conventions | 52.00% | 82% | 2.03 | 78 | -1.0 |
| Thomas Holbrook | National Conditions and Trial Heat | 52.50% | 81% | 2.19 (MAE) | 61 | -1.5 |
| Andreas Graefe, J. Scott Armstrong, Randall Jones, and Alfred Cuzán | Pollyvote | 52.70% | n/a | 1.65 (MAE) | 63 | -1.7 |
| Alan Abramowitz | Time for a Change | 48.60% | 66% | 1.74 | 102 | 2.4 |
| Helmut Norpoth | The Primary Model | 47.50% | 87% | n/a | 246 | 3.5 |

How the poll aggregators performed

Now, how did the poll aggregators do in forecasting the 2016 election outcome? Looking at the Electoral College forecasts listed by The Upshot on the New York Times website, each gave Clinton a 70 percent or better chance of winning. These efforts, all based on state and national polls, were wide of the mark. Indeed, the venerable Princeton Election Consortium was wildly off, giving Clinton a 99 percent certainty of victory.

The poll aggregators’ popular vote forecasts did not fare as badly as their Electoral College predictions. The Upshot had Clinton winning the national popular vote with 45.4 percent to Trump’s 42.3 percent and Libertarian candidate Gary Johnson’s five percent. FiveThirtyEight’s final update on November 8, in its polls-only forecast, showed Clinton with 48.5 percent, Trump with 44.9 percent, Johnson with five percent, and Green Party candidate Jill Stein with 1.6 percent of the popular vote (its polls-plus forecast was virtually identical). The average of the election eve polls as reported on RealClearPolitics had Clinton leading the four-way race 45.4 percent to 42.2 percent (Trump), 4.7 percent (Johnson), and 1.9 percent (Stein).

Table 2 reports the forecasting errors of the popular vote forecasts for Clinton and Trump. The prediction error for Clinton’s share of the popular vote was 2.7 percent in The Upshot and RealClearPolitics, and only 0.4 percent in FiveThirtyEight.  The forecast errors for Trump, however, were larger in The Upshot and RealClearPolitics (3.9 and 4.0 percent). As these forecasts are based on aggregated polls, the size of these errors can be better understood by scrutinizing some of the individual polls on the eve of the election.
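The forecast errors in Table 2 are simply the actual vote share minus the predicted share. As a minimal sketch (our own illustration, using the aggregator figures quoted above):

```python
# Sketch: forecast error = actual share - predicted share, per candidate.
# Figures are the aggregator predictions quoted in the text; this is an
# illustration, not any aggregator's methodology.

actual = {"Clinton": 48.1, "Trump": 46.2}
forecasts = {
    "NYT Upshot":        {"Clinton": 45.4, "Trump": 42.3},
    "FiveThirtyEight":   {"Clinton": 48.5, "Trump": 44.9},
    "RealClearPolitics": {"Clinton": 45.4, "Trump": 42.2},
}

for source, preds in forecasts.items():
    errors = {cand: round(actual[cand] - preds[cand], 1) for cand in preds}
    print(source, errors)
```

Run on these numbers, the sketch reproduces the errors reported in Table 2, including FiveThirtyEight's small negative (over-)prediction for Clinton.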

Table 2 – Accuracy of Poll Aggregators’ Forecasts of Popular Vote

| Poll aggregator | Clinton prediction (error from 48.1% actual) | Trump prediction (error from 46.2% actual) |
| --- | --- | --- |
| New York Times Upshot | 45.4% (2.7%) | 42.3% (3.9%) |
| FiveThirtyEight | 48.5% (-0.4%) | 44.9% (1.3%) |
| RealClearPolitics average | 45.4% (2.7%) | 42.2% (4.0%) |

How the national polls performed

How wrong were the individual national polls for the 2016 presidential election? To answer this question, we evaluate ten national likely voter surveys taken in November, as reported by RealClearPolitics. To assess each poll’s error, we take as a standard its reported margin of error (MoE) at the 95 percent confidence level. If the actual vote for Clinton or Trump fell within that margin, we judge the poll to have an acceptable level of accuracy.

Results in Table 3 show that Clinton’s popular vote percentage (48.1 percent) was inside the poll’s margin of error in only three of the ten likely voter polls conducted in November. Table 4 shows that Trump’s vote percentage (46.2 percent) was inside the margin of error in only four of the ten final polls. Given how much attention is paid to polls in American elections, having accurate polls is a necessity. However, Tables 3 and 4 show that if election eve polls are any indication of overall poll accuracy, then there is significant room for improvement.
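The within-MoE test above is straightforward to express. As a rough sketch (our own illustration, not the pollsters' code), using three of the polls from Tables 3 and 4:

```python
# Sketch: does the actual vote fall inside each poll's reported margin of
# error? Figures taken from Tables 3 and 4; illustrative only.

ACTUAL = {"Clinton": 48.1, "Trump": 46.2}

polls = [
    # (poll, MoE, Clinton %, Trump %)
    ("ABC/Wash Post Tracking", 2.5, 47, 43),
    ("FOX News",               2.5, 48, 44),
    ("CBS News",               3.0, 45, 41),
]

for name, moe, clinton, trump in polls:
    for cand, poll_pct in (("Clinton", clinton), ("Trump", trump)):
        inside = abs(ACTUAL[cand] - poll_pct) <= moe
        print(f"{name} / {cand}: {'within' if inside else 'outside'} +/-{moe}")
```

By this test, for instance, the ABC/Washington Post tracking poll captures Clinton's 48.1 percent within its ±2.5 margin, while the CBS News poll (45 percent, ±3.0) misses it.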

Table 3 – Accuracy of Election Eve Polls for Clinton Prediction

| Poll | MoE | Clinton poll % | Error beyond MoE |
| --- | --- | --- | --- |
| ABC/Wash Post Tracking | ±2.5 | 47 | * |
| FOX News | ±2.5 | 48 | * |
| CBS News | ±3.0 | 45 | 0.1 |
| Rasmussen Reports | ±2.5 | 45 | 0.6 |
| NBC News/Wall St. Jrnl | ±2.7 | 44 | 1.4 |
| IBD/TIPP Tracking | ±3.1 | 43 | 2.0 |
* = Actual percent of total vote as of 12/2/2016 (48.1 Clinton, 46.2 Trump) is within the poll's margin of error.

Table 4 – Accuracy of Election Eve Polls for Trump Prediction

| Poll | MoE | Trump poll % | Error beyond MoE |
| --- | --- | --- | --- |
| FOX News | ±2.5 | 44 | * |
| IBD/TIPP Tracking | ±3.1 | 45 | * |
| ABC/Wash Post Tracking | ±2.5 | 43 | 0.7 |
| Rasmussen Reports | ±2.5 | 43 | 0.7 |
| CBS News | ±3.0 | 41 | 2.2 |
| NBC News/Wall St. Jrnl | ±2.7 | 40 | 3.5 |
* = Actual percent of total vote as of 12/2/2016 (48.1 Clinton, 46.2 Trump) is within the poll's margin of error.

Furthermore, this pattern of errors works against the increasingly widespread conclusion that the polls, after all, functioned as they should. As Sean Trende of RealClearPolitics put it, “The story of 2016 is not one of poll failure.” Our results suggest otherwise, especially when coupled with the aggregated poll estimates across the entire electoral cycle, which virtually always put Clinton ahead; see, for example, the RealClearPolitics Four-Way National Poll Average from July 1, 2015 to November 3, 2016. This consistently pro-Clinton result, from a very large number of samples, indicates systematic bias in the polling: in the textbook exercise of estimating a population parameter from repeated large random samples, an unbiased estimator should not err in the same direction nearly every time. It is hard to resist the implication that some fundamental mistakes are occurring at the level of sampling design and execution. Future forecasts based entirely on polls need to be highly attentive to their methodologies, and transparent in their explanations and predictions, in order to secure public trust in their validity.

Featured image credit: Jason Short (Flickr, CC-BY-NC-SA-2.0)

Please read our comments policy before commenting.  

Note:  This article gives the views of the authors, and not the position of USApp– American Politics and Policy, nor of the London School of Economics.



About the authors  

Charles Tien – CUNY
Charles Tien is Professor of Political Science at Hunter College & The Graduate Center, City University of New York. He previously served as a Fulbright Scholar in American Politics at Renmin University in Beijing, China.


Michael S. Lewis-Beck – University of Iowa
Michael S. Lewis-Beck is F. Wendell Miller Distinguished Professor of Political Science at the University of Iowa. He has authored or co-authored over 270 articles and books, including Economics and Elections, The American Voter Revisited, French Presidential Elections, Forecasting Elections, The Austrian Voter, and Applied Regression.
