How far off were the polls in the 2020 election?


Many national and state polls failed to accurately predict the 2020 election results, suggesting a far more decisive outcome in favor of Joe Biden than the one that materialized.

This is the second consecutive presidential election in which the polls appear to have underestimated President Donald Trump – an error that may have had significant consequences for how people voted. In the run-up to the 2020 election, many polls showed Joe Biden and Democrats with a significant lead, both nationally and in multiple swing states. According to FiveThirtyEight, however, Trump overperformed its national and swing-state polling averages by a median of 6 points. Pollsters are now confronting the question of how far off the polls were and what caused it (NPR).

A few weeks before Election Day 2020, dozens of polls showed Joe Biden as the clear leader over President Donald Trump in many battleground states. Those same polls also showed Democrats likely to gain the majority in the Senate and pick up seats in the House. Based on the votes counted so far, those polls were off. Though Biden is projected to win the presidency, the 2020 polling consensus had held him to be a considerably stronger favorite. Statistical aggregator FiveThirtyEight gave Biden an 89% chance of winning and estimated that he would win the popular vote by 8 percentage points as well as claim Florida, a state that ultimately went to Trump. The polls averaged by RealClearPolitics predicted that he would win the popular vote by 7.2 percentage points and listed as “toss-ups” a number of states that wound up solidly in Trump’s favor, including Florida, Ohio, Iowa, and Texas.

Political analyst Charlie Cook epitomized the prevailing view when he wrote on Election Day that Biden’s “actual lead is more like 9 or 10 points, based on the higher-quality, live-telephone-interview national polls conducted since the first debate” and that his “path to 270 electoral votes seems pretty straightforward.” After the polling debacle of 2016, pollsters spent years trying to hone their models to account for the errors that had led them to widely predict a Clinton victory. The data suggests they did not succeed.

“Clearly, Trump’s support was underestimated again,” says Melinda Jackson, associate dean of the Political Science Department at San Jose State University and an expert in political psychology and polling. Jackson acknowledges that presidential polling was off in both 2016 and 2020, and says one of the biggest reasons is that in both years Trump was on the ballot. “Donald Trump frequently talks about not trusting the media, when a lot of these polls are done by media organizations and universities, you could see how a Trump supporter might think, nope, I’m not participating in this,” says Jackson.

There are a number of possible explanations for why the pollsters were so far off, says Nick Beauchamp, assistant professor of political science at Northeastern. One is that specific polls undercounted the extent to which certain demographics – such as Hispanic voters in specific states – shifted toward President Trump. Another is that, just as in 2016, polls undercounted hard-to-reach voters, who tend to be more conservative. Beauchamp is less convinced that “shy” Trump voters deliberately misrepresented their intentions to pollsters. Political experts say this was a very different election year for a number of reasons: the pandemic led to record levels of mail-in ballots and early voting, which could be part of the reason the polls were so far off. Those same experts note, however, that this doesn’t explain why the polls got it wrong two presidential elections in a row.

“Whatever the cause, it has huge implications not just for polling, but for the outcome of the presidential and Senate elections,” Beauchamp says. “If the polls have been this wrong for months, since they have been fairly stable for months, that means that campaign resources may have also been misallocated.” Beauchamp pointed to a tweet by political pollster Josh Jordan, which showed just how much Trump over-performed the FiveThirtyEight averages in nine swing states. In Ohio, for example, he ran seven points better. In Wisconsin, it was eight points. “Trump over-performed relative to the polls in these states by a median of 6 points,” says Beauchamp. “That’s a shockingly large error, though in other states it may have been smaller.” This year’s polling errors, Beauchamp said, were “enormous” even compared to 2016, when polls failed to predict Trump’s defeat of Democratic nominee Hillary Clinton.
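The swing-state comparison above boils down to simple arithmetic: subtract the final polling-average margin from the actual result in each state, then take the median across states. A minimal sketch of that calculation follows; the state names and margins here are hypothetical illustrations, not the actual FiveThirtyEight averages quoted in the text.

```python
# Sketch of how per-state polling error and its median are computed.
# Margins are (Biden minus Trump) in percentage points; a positive
# "overperformance" means Trump beat his polls by that many points.
# All figures are hypothetical illustrations, not real averages.
from statistics import median

polled_margin = {"StateA": 8.0, "StateB": 1.0, "StateC": 6.5}
actual_margin = {"StateA": 1.0, "StateB": -7.0, "StateC": 0.5}

# Error = actual margin minus polled margin (negative when Trump ran hot).
errors = {s: actual_margin[s] - polled_margin[s] for s in polled_margin}

# Flip the sign to express it as Trump's overperformance, per state.
trump_overperformance = {s: -e for s, e in errors.items()}

median_overperformance = median(trump_overperformance.values())
print(trump_overperformance)   # per-state points Trump beat his polls by
print(median_overperformance)  # median across the states
```

The median, rather than the mean, is the figure FiveThirtyEight-style comparisons quote because it is insensitive to one or two extreme state-level misses.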

Just days before the presidential contest, FiveThirtyEight founder Nate Silver predicted that Biden was slightly favored to win Florida and its 29 Electoral College votes. The race was called on election night with Trump comfortably ahead. “I think Nate Silver and the other pollsters are saying ‘Well that’s just within the bounds of systematic polling,’ but it seems awfully large to me for just that,” Beauchamp says. Now, election watchers and media leaders are questioning the value of polling overall. Washington Post media columnist Margaret Sullivan wrote that “we should never again put as much stock in public opinion polls, and those who interpret them, as we’ve grown accustomed to doing.”

The polls’ predictions in the 2018 midterm elections proved largely accurate, so pollsters thought they had fixed the problems. Fast forward to the 2020 presidential election, and the polls were off again – so those same pollsters will be going back once more to try to figure out what happened and where they went so wrong (CBS).

Polling misses aren’t unique to the United States, Beauchamp points out. Polls also failed to predict the 2015 elections in the United Kingdom, as well as the UK’s 2016 “Brexit” vote to exit the European Union. In those cases, as with the 2016 US election, Beauchamp said, pollsters “made these mistakes, which are relatively small, but in the same direction and with fairly significant effects.” To avoid a repeat of 2016 and 2020 in the United States, Beauchamp says, pollsters should shift their tactics – and perhaps attach different weights to factors they’re trying to measure, such as social distrust or propensity to be a non-voter (News Northeastern).
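The reweighting Beauchamp describes – attaching different weights to factors such as social distrust – is, in its simplest form, post-stratification: give each respondent a weight so that the sample's composition matches known population shares on some variable. A minimal sketch on one hypothetical variable (the group labels and shares are illustrative assumptions, not real survey data):

```python
# Minimal sketch of one-variable survey reweighting (post-stratification):
# underrepresented groups get weights > 1 so the weighted sample matches
# an assumed population split. Groups and shares are hypothetical.
from collections import Counter

respondents = ["high_trust"] * 70 + ["low_trust"] * 30    # raw sample: 70/30
population_share = {"high_trust": 0.5, "low_trust": 0.5}  # assumed true split

counts = Counter(respondents)
sample_share = {g: n / len(respondents) for g, n in counts.items()}

# Weight = population share / sample share for each group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Low-trust respondents are underrepresented, so their weight exceeds 1,
# and the weighted sample totals the same size as the raw one.
weighted_total = sum(weights[g] * counts[g] for g in counts)
print(weights, weighted_total)
```

The hard part in practice, as the 2016 and 2020 misses suggest, is knowing the true population shares for traits like social distrust or turnout propensity; weighting on the wrong variables, or on none, leaves the bias in place.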

