The surprising outcome of the 2016 presidential election led many people to question the credibility of public opinion polling. That’s a fair question, considering so many polls suggested Hillary Clinton would win the election. Twelve of the thirteen final national polls predicted a Clinton victory. Indeed, some forecasters pegged her chances of winning at ninety percent, or even ninety-nine percent.
So what went wrong with those polls?
The truth is, for the most part, the polls were actually right! National polls suggested Hillary Clinton would have a three-percentage-point lead over Donald Trump. In reality, she finished with a two-point lead nationally. That’s not bad when we consider the myriad challenges pollsters face as they try to capture the mood of an erratic public. True, sometimes pollsters err. But often polls appear wrong because we interpret them incorrectly. It is on all of us as citizens to familiarize ourselves with how polls work, and what they can and cannot tell us.
Here’s the most important thing to remember about election polls: they are not instruments of prediction. Instead, think of them as slightly blurry photographs of an object in motion. The photograph tells us what is happening in one particular moment, at one particular time. A slightly blurry photograph can still capture the general scene and the direction of the object, but the image will not perfectly reflect what is actually happening, nor necessarily what will happen. Polls are similar—they give us a fairly clear perspective on a given issue at a given time, but things can and do change.
To be better consumers of polls in 2020, it is helpful to look at what happened in 2016. Let’s consider some of the reasons polls appeared to miss the mark during the last presidential election cycle.
Why might polls be off?
- The science of sampling. A valid poll requires a representative sample. Imagine you polled 1,000 people from a population of 300,000,000. If the population is, say, 50.5% female, then your sample should be 50.5% female. If 34% of the population has a college degree, then 34% of your sample should have a college degree. The only way to get a representative sample is to give everyone in the population the same probability of being picked to participate in a poll. This is extremely difficult to do. Some groups are harder to reach than others, such as lower income earners who may not be as available to talk to a pollster. In addition, some people may simply be less willing to answer the phone than others. Issues like non-response bias can spoil the representativeness of a poll, and lead to nearly all surveys being off a little bit, no matter how good the pollster is.
- Respondents may not be entirely truthful. A representative sample is no good if people do not tell the truth. In 2016, many observers thought there was a sizable number of voters who supported Donald Trump but did not want to admit it. While most of the subsequent analysis showed this “shy Trump voters” phenomenon had little or no impact on the election, the broader point is very important: humans are…well, human, and that includes not always being honest. For example, people often overreport their tendency to engage in “good” behaviors like voting or donating to charity. This is why the number of people who say they voted often outpaces the number of people who actually did.
- Election circumstances change quickly. Remember, polls are snapshots in time; the image can change quickly if you take the same picture the next day. In 2016, many voters made up their minds in the last few days before the election. Roughly 13 percent of voters in the key battleground states of Wisconsin, Florida, and Pennsylvania made their decision in the last week before Election Day. Late-deciders broke for Donald Trump by a full 30 points in Wisconsin, and by 17 points in Florida and Pennsylvania. Therefore, even if a poll is representative and people are truthful, their answer in the poll may not align with what they do in the voting booth.
How to be an expert poll consumer in 2020
Do all these challenges render polls meaningless? Absolutely not. The problem isn’t that polls are inherently bad; it’s that they are inherently imprecise. Polls contain error that is rarely put into perspective on television, at a bar, or around the dinner table. To get the most out of the polls in 2020, we need to put their results in context. Here are some steps that will help you do so.
- Think local: focus on key swing state polls. National polls are interesting, and it’s true that whichever candidate they show in the lead usually wins the election. But remember that presidents are not picked via national popular vote. Instead, the president is determined by combining the results of 50 state elections (plus DC) across the country. Thus, national polls do not track the quantity that actually decides the election. So it is a good idea to focus on state polls in key swing states like Wisconsin, Michigan, and Pennsylvania. We know how most states will vote this November. Polls are most helpful in understanding the ones we do not.
- Find that margin of error. Because all polls have a high probability of being at least slightly inaccurate, they must include a margin of error. This is basically a prediction of how “off” your poll might be. For example, imagine you see a poll on the news that shows Joe Biden has the support of 52% of voters and Donald Trump has the support of 48% of voters. Somewhere on the screen you should see the margin of error. Let’s say it is 3% in our example. Add and subtract this error to the results for both candidates to create a range of numbers. For Biden, this range is 49% (52-3) to 55% (52+3). For Trump, this range is 45% to 51%. We can be confident that the true percentage of supporters for each candidate lies within this range. Our best guess is that 52% of the voters support Biden, but it is possible that the real percentage may be as low as 49% or as high as 55%.
- One other very important point: if the range for one candidate overlaps with the range of another candidate, the race is too close to call. That’s what we see in our example: Biden (49-55) and Trump (45-51) overlap a bit. So even though it looks like Biden is ahead, and he most likely is, it is still possible due to random sampling error that the candidates are tied, or even that Trump is winning. For an extended discussion, check out this primer on the margin of error in election polls.
- Ignore small-sample polls. The margin of error is directly related to sample size. The larger the sample, the smaller the error. Polls with fewer than 400 people often have margins of error so large that they cannot reliably detect which candidate is leading in the race. Look for polls with at least 800 people, preferably closer to 1,000-1,200. Remember, though, that polls must be representative to be any good. Pollsters will take a 1,000-person poll that is representative over a 10,000-person poll that is not representative any day of the week.
- Conduct a “poll of polls.” A great way to minimize the potential for an outlier poll to overly influence your perception of the race is to look at polling averages, rather than just individual polls. A number of websites aggregate numerous polls to create a running national average, such as RealClearPolitics and FiveThirtyEight. Remember, if you see a poll with a highly unusual finding, ask yourself: why is only one poll finding this? It is likely because the result is a function of the poll itself, not the reality it purports to measure.
- Consider the source. Unfortunately, some polling firms do better work than others. Fortunately, there are ways to gauge firm quality: organizations like FiveThirtyEight maintain “Pollster Ratings” to help you quickly identify if a polling source is a reputable one. In a similar vein, remember that media pundits of various political persuasions may focus more on polls that show results they find favorable to their political side. Such polls may be outliers. The solution: consume a variety of media, consider numerous sources, and follow the polling aggregators.
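The arithmetic behind several of the steps above can be sketched in a few lines of Python. This is purely illustrative: the 52%/48% figures and the 3% margin of error are the hypothetical Biden/Trump example from the text, the five poll numbers in the “poll of polls” are invented, and the margin-of-error formula is the standard 95% approximation for a simple random sample.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error (in percentage points) for a
    simple random sample of size n, assuming the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n) * 100

def ranges_overlap(a, b, moe):
    """Build each candidate's range (result +/- margin of error) and
    report whether the ranges overlap, i.e. the race is too close to call."""
    lo_a, hi_a = a - moe, a + moe
    lo_b, hi_b = b - moe, b + moe
    return (lo_a, hi_a), (lo_b, hi_b), lo_a <= hi_b and lo_b <= hi_a

# The hypothetical example from the text: Biden 52%, Trump 48%, MOE 3%
biden_range, trump_range, too_close = ranges_overlap(52, 48, 3)
print(biden_range)  # (49, 55)
print(trump_range)  # (45, 51)
print(too_close)    # True -- the ranges overlap, so the race is too close to call

# Why sample size matters: the margin of error shrinks roughly with 1/sqrt(n)
for n in (400, 800, 1000, 1200):
    print(n, round(margin_of_error(n), 1))  # 4.9, 3.5, 3.1, 2.8 points

# A simple "poll of polls": averaging damps the influence of any one outlier
polls = [52, 50, 53, 48, 51]    # hypothetical Biden shares from five polls
print(sum(polls) / len(polls))  # 50.8
```

Note how a 400-person poll carries a margin of error near 5 points: in a race polling 52-48, that alone makes the result uninformative about who leads, which is why the text suggests looking for samples of 800 or more.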
Putting polls in perspective
Many critics of polling in the 2016 election unfairly indicted polling itself as a dubious enterprise. Sure, pollsters made some errors, but the truth is perfect polling is nearly impossible to achieve: representative samples are hard to get, respondents may not report future behavior accurately (intentionally or otherwise), and pollsters have to make best guesses that do not always work out. It is therefore crucial that we inform ourselves about what polls can and cannot tell us, and remember that they are attempting to capture the views of an American public that is constantly evolving with new experiences, such as a global pandemic. Remember, too, that polls are snapshots in time, not perfect predictors of events. Weather data can give us a decent sense of what the temperature will be tomorrow, but those projections are sometimes a bit off; polls are no different. In an election, being off a little bit sometimes means the candidate “behind” in the polls wins the race.
 Here are some more technical details: Scientific polls require researchers to make certain decisions, assumptions, and corrections regarding their samples and analyses. For instance, pollsters must determine which population of individuals to talk to. Many polls say they talk to “American adults” or “registered voters” or “likely voters.” But how do we define a “likely voter?” Someone who says they are going to vote? Someone who voted in the last election? The last three elections? Another issue pollsters must address is how to statistically adjust samples that suffer from a lack of representativeness. These corrections rely on educated assumptions that can sometimes be off. For instance, in 2016, many state polls did not adequately correct for an over-representation of highly educated voters. The result? These polls did not fully capture support for Donald Trump, who enjoyed support from many people with less formal education. Indeed, weighting surveys for education continues to be tricky business.
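The kind of weighting correction described above can be sketched in a toy example. Every number here is invented for illustration: assume 35% of the population holds a college degree but 50% of respondents do, so college-educated respondents are over-represented and each should count for less.

```python
# Toy post-stratification example (all figures invented for illustration).
population_share = {"college": 0.35, "no_college": 0.65}
sample_share     = {"college": 0.50, "no_college": 0.50}

# Each group's weight = its population share / its sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'college': 0.7, 'no_college': 1.3}

# Hypothetical candidate support in each group: 60% of college respondents
# and 40% of non-college respondents back Candidate A.
support = {"college": 0.60, "no_college": 0.40}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted   = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(round(unweighted, 2))  # 0.5  -- the raw sample overstates A's support
print(round(weighted, 2))    # 0.47 -- matches the true population figure
```

The weighted estimate (47%) equals what a fully representative sample would have found (0.35 × 60% + 0.65 × 40%), which is the logic behind the education corrections many 2016 state polls skipped.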
Eric Loepp is a guest contributor for the RAISE the Vote Campaign. The views expressed in the posts and articles featured in the RAISE the Vote campaign are those of the authors and contributors alone and do not represent the views of APSA.
Eric Loepp (Ph.D., University of Pittsburgh) is an assistant professor of political science at the University of Wisconsin-Whitewater, where he teaches courses in American government, political behavior, and research methods. His research focuses on candidate evaluations and electoral decision-making, particularly in primary elections. This work has been published in such journals as Electoral Studies, the Journal of Elections, Public Opinion, & Parties, Research & Politics, American Politics Research, and PS: Political Science & Politics.