Pew Research Center conducted this study to understand how errors in representing the level of support for Joe Biden and Donald Trump in preelection polling could affect the accuracy of questions in those same polls (or other polls) that measure public opinion on issues. Specifically, if polls about issues underrepresent the Republican base the way that many 2020 preelection polls appeared to, how inaccurate would they be on measures of public opinion about issues? We investigated by taking a set of surveys that measured a wide range of issue attitudes and using a statistical procedure known as weighting to make them mirror two different scenarios. One scenario mirrored the true election outcome among voters (a 4.4-point Biden advantage), and the other substantially overstated Biden’s advantage (a 12-point lead). For this analysis, we used several surveys conducted in 2020 with more than 10,000 members of Pew Research Center’s American Trends Panel (ATP), an online survey panel recruited through national, random sampling of residential addresses, which ensures that nearly all U.S. adults have a chance of selection. Questions in these surveys measured opinions on issues such as health care, the proper scope of government, immigration, race, and the nation’s response to the coronavirus pandemic. These opinions were examined to see how they differed between the two scenarios.
Most preelection polls in 2020 overstated Joe Biden’s lead over Donald Trump in the national vote for president, and in some states incorrectly indicated that Biden would likely win or that the race would be close when it was not. These problems led some commentators to argue that “polling is irrevocably broken,” that pollsters should be ignored, or that “the polling industry is a wreck, and should be blown up.”
The true picture of preelection polling’s performance is more nuanced than depicted by some of the early broad-brush postmortems, but it is clear that Trump’s strength was not fully accounted for in many, if not most, polls. Election polling, however, is just one application of public opinion polling, though obviously a prominent one. Pollsters often point to successes in forecasting elections as a reason to trust polling as a whole. But what is the relevance of election polling’s problems in 2020 for the rest of what public opinion polling attempts to do? Given the errors in 2016 and 2020, how much should we trust polls that attempt to measure opinions on issues?1
A new Pew Research Center analysis of survey questions from nearly a year’s worth of its public opinion polling finds that errors of the magnitude seen in some of the 2020 election polls would alter measures of opinion on issues by an average of less than 1 percentage point. Using the national tally of votes for president as an anchor for what surveys of voters should look like, analysis across 48 issue questions on topics ranging from energy policy to social welfare to trust in the federal government found that the error associated with underrepresenting Trump voters and other Republicans by magnitudes seen in some 2020 election polling varied from less than 0.5 to 3 percentage points, with most estimates changing hardly at all. Errors of this magnitude would not alter any substantive interpretations of where the American public stands on important issues. This does not mean that pollsters should quit striving to have their surveys accurately represent Republican, Democratic and other viewpoints, but it does mean that errors in election polls don’t necessarily lead to comparable errors in polling about issues.
How is it possible that underestimating GOP electoral support could have such a small impact on questions about issues?
Why did we choose to test a 12-point Biden lead as the alternative to an accurate poll?
We created a version of our surveys with an overstatement of Biden’s advantage in the election (a “tilted version”) to compare with a “balanced version” that had the correct Biden advantage of 4.4 percentage points. The 12-percentage-point Biden lead used in the “tilted” version of the simulation is somewhat arbitrary; we chose it because it was the largest lead in a national poll released by a major news organization in the two weeks before Election Day, as documented by FiveThirtyEight. Several polls showed Biden leads nearly as large during that period. The simulation, including the manipulation of party affiliation among nonvoters, is described in greater detail below.
This finding may seem surprising. Wouldn’t a poll that forecast something as large as a 12 percentage point Biden victory also mislead on what share of Americans support the Black Lives Matter movement, think that the growing number of immigrants in the U.S. threatens traditional American customs and values, or believe global climate change is mostly caused by human activity?
The accuracy of issue polling could be harmed by the same problems that affected election polling because support for Trump vs. Biden is highly correlated with party affiliation and opinions on many issues. Pew Research Center has documented the steadily increasing alignment of party affiliation with political values and opinions on issues, a type of political polarization. It stands to reason that measures of political values and opinions on issues could be harmed by whatever it is that led measures of candidate preference to be wrong.
But “highly correlated” does not mean “the same as.” Even on issues where sizable majorities of Republicans and Democrats (or Trump and Biden supporters) line up on opposite sides, there remains more diversity in opinion among partisans about issues than in candidate preference. In recent elections, about nine-in-ten of those who identify with a political party vote for the presidential candidate of that party, a share that has grown over time. But that high degree of consistency between opinions on issues and candidate preference – or party affiliation – is rare. That fact limits the extent to which errors in estimates of candidate preference can affect the accuracy of issue polling.
Visualizing a closely divided electorate
Election polling in closely divided electorates like those in the U.S. right now demands a very high degree of precision. Relatively small errors in the composition of the sample can produce sizable differences in the margin between the candidates – changing a small share of the sample can make a big difference in who appears to be ahead.
To visualize how few voters need to change to affect the margin between the candidates, consider a hypothetical poll of 1,000 adults. One version shows Biden prevailing over Trump by 12 percentage points (left side of the figure), while the version on the right shows the accurate election result. Biden voters are shown as blue squares and Trump voters as red squares (votes for third-party candidates are shown in gray along the bottom), and the strip in the middle shows the voters who change from the left figure to the right one.
The version on the right shows the actual 2020 election result nationally – a Biden advantage of a little more than 4 percentage points. It was created by slightly increasing the representation of Trump voters and decreasing the representation of Biden voters, so that the poll moves from a 12-point Biden advantage to a 4-point Biden advantage. This adjustment, in effect, flips the vote preferences of some of the voters. How many voters must be “changed” to move the margin from 12 points to about 4 points?
The answer is not very many – just 38 of the 1,000, or about 4% of the total. The Biden voters who are replaced by Trump voters are shown as the dark blue vertical strip in the middle of the left-hand panel of the graphic (12-point victory) and dark red in the right panel (more modest 4-point victory).
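The arithmetic behind that figure is easy to verify. In a sample of 1,000, each respondent who flips from Biden to Trump subtracts 0.1 points from Biden’s share and adds 0.1 points to Trump’s, shrinking the margin by 0.2 points, so closing a 7.6-point gap takes 38 flips. A minimal back-of-the-envelope sketch (illustrative only, not the code used to build the graphic):

```python
# Back-of-the-envelope check of the 38-voter figure (illustrative only).
sample_size = 1_000
margin_tilted = 12.0   # Biden lead in the exaggerated poll, in percentage points
margin_actual = 4.4    # Biden's actual national lead, in percentage points

# Flipping one respondent from Biden to Trump shifts the margin by 2 respondents
# out of 1,000, i.e. 0.2 percentage points per flipped voter.
points_per_flip = 2 * 100 / sample_size

flips_needed = (margin_tilted - margin_actual) / points_per_flip
print(flips_needed)  # 38.0 -> about 4% of the 1,000 respondents
```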
In addition to shifting the margin in the race, this change in the sample composition has implications for all the other questions answered by the Trump and Biden voters. The Trump voters, whose numbers have increased statistically, now have a larger voice in questions about immigration, climate change, the appropriate size and scope of the federal government, and everything else in the surveys. The Biden voters have a correspondingly smaller voice.
But as may be apparent from comparing the left and right panels, the two pictures of the electorate are quite similar. Both show a country that is closely divided politically, with neither party holding a monopoly on the voting public. The division is not perfectly even – Republicans do not outnumber Democrats among voters in either picture – but the margin among voters is small. It is this closeness of the country’s political division, even under a scenario with a sizable forecast error, that suggests conclusions about the broad shape of public opinion on issues are unlikely to be greatly affected by whether election polls can pinpoint the margin between the candidates.
Simulating two versions of political support among the public
To demonstrate the range of possible error in issue polling that could result from errors like those seen in 2020 election polling, we conducted a simulation that produced two versions of several of our opinion surveys from 2020, similar to the manipulation depicted in the hypothetical example shown above. One version included exactly the correct share of Trump vs. Biden voters (a Biden advantage of 4.4 percentage points) – we will call it the “balanced version” – and a second version included too many Biden voters (a Biden advantage of 12 percentage points, which was the largest lead seen in a public poll of a major polling organization’s national sample released in the last two weeks of the campaign, as documented by FiveThirtyEight). We’ll call it the “tilted version.”
But nearly all of Pew Research Center’s public opinion polling on issues is conducted among the general public and not just among voters. Nonvoters make up a sizable minority of general public survey samples. In our 2020 post-election survey, nonvoters were 37% of all respondents (8% were noncitizens who are ineligible to vote and the rest were eligible adults who reported not voting). It’s entirely possible that the same forces that led polls to underrepresent Trump voters would lead to the underrepresentation of Republicans or conservatives among nonvoters. Thus, we need to produce two versions of the nonvoting public to go along with our two versions of the voters.
Unlike the situation among voters, where we have the national vote margin as a target, we do not have an agreed-upon, objective target for the distribution of partisanship among nonvoters. Instead, for the purposes of demonstrating the sensitivity of opinion measures to changes in the partisan balance of the nonvoter sample, we paired the more accurate election outcome (the 4.4-point Biden margin among voters) with a nonvoter sample containing equal numbers of Republicans and Democrats, and paired the larger (and inaccurate) 12-point Biden margin among voters with a 10-point Democratic advantage in party affiliation among nonvoters.2 These adjustments, in effect, simulate different samples of the public. In addition to the weighting that generates the candidate preference and party affiliation scenarios, the surveys are weighted to be representative of the U.S. adult population by gender, race, ethnicity, education and many other characteristics.3 This kind of weighting, which is common practice among polling organizations, helps ensure that the sample matches the population on characteristics that may be related to the opinions people hold.
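To make the procedure concrete, the sketch below shows the kind of reweighting involved. This is not Pew Research Center’s production weighting code: the column names, category labels and target shares are hypothetical, and, as noted above, the real procedure also weights the combined sample to demographic and other benchmarks.

```python
import pandas as pd

def adjust_to_targets(df, group_col, targets, weight_col="weight"):
    """Scale existing weights so the weighted shares of `group_col` match `targets`.

    `targets` must include every category present in `group_col`.
    """
    shares = df.groupby(group_col)[weight_col].sum() / df[weight_col].sum()
    out = df.copy()
    out[weight_col] = out[weight_col] * out[group_col].map(lambda g: targets[g] / shares[g])
    return out

# Illustrative scenario targets (not the report's exact figures).
balanced_vote  = {"biden": 0.513, "trump": 0.468, "other": 0.019}  # Biden +4.4 among voters
tilted_vote    = {"biden": 0.545, "trump": 0.425, "other": 0.030}  # Biden +12 among voters
balanced_party = {"rep": 0.30, "dem": 0.30, "other": 0.40}         # parity among nonvoters
tilted_party   = {"rep": 0.25, "dem": 0.35, "other": 0.40}         # Dem +10 among nonvoters

def build_scenario(panel, vote_targets, party_targets):
    """Return a copy of the panel weighted to one candidate-preference/party scenario."""
    voters    = adjust_to_targets(panel[panel["voted"]],  "vote_choice", vote_targets)
    nonvoters = adjust_to_targets(panel[~panel["voted"]], "party",       party_targets)
    return pd.concat([voters, nonvoters])
```

A full implementation would also hold the overall voter/nonvoter split fixed and then weight the combined file to demographic benchmarks, as described above.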
The simulation takes advantage of the fact that our principal source of data on public opinion is the American Trends Panel, a set of more than 10,000 randomly selected U.S. adults who have agreed to take regular online surveys from us. We conducted surveys with these same individuals approximately twice per month in 2020, with questions ranging across politics, religion, news consumption, economic circumstances, technology use, lifestyles and many more topics. For this analysis, we chose a set of 48 survey questions representing a wide range of important topics on nine different surveys conducted during 2020.
After the November election, we asked our panelists if they voted, and if so, for whom. We also collect a measure of party affiliation for all panelists, regardless of their voter status. With this information, we can manipulate the share of Biden vs. Trump voters in each poll, and Democrats vs. Republicans among nonvoters, and look back at their responses to surveys earlier in the year to gauge how our reading of public opinion on issues differs in the two versions.
Before describing the results in more detail, it’s important to be explicit about the assumptions underlying this exercise. We can manipulate the share of voters for each presidential candidate and the share of Democrats and Republicans among nonvoters, but the results may not tell the full story if the Trump and Biden voters in our surveys do not accurately represent their counterparts in the population. For example, if believers in the internet conspiracy theories known as QAnon are a much higher share of Trump voters in the population than in our panel, that could affect how well our simulation reflects the impact of changing the number of Trump voters. The same is true for our adjustments of the relative shares of Democrats and Republicans. If the partisans in our panel do not accurately reflect the partisans in the general public, we may not capture the full impact of over- or underrepresenting one party or the other.
How much does the difference between these two scenarios affect measures of opinion on issues?
The adjustment from the tilted version (a 12-point Biden advantage, with a 10-point Democratic advantage in party affiliation among nonvoters) to the balanced version (a 4.4-point Biden advantage, with equal numbers of Democrats and Republicans among nonvoters) makes very little difference in the balance of opinion on issue questions. Across a set of 48 opinion questions and 198 answer categories, most answer categories changed by less than 0.5 percentage points. The average change associated with the adjustment was less than 1 percentage point, and approximately twice that for the margin between alternative answers (e.g., favor minus oppose). The maximum change observed across the 48 questions was 3 points for a particular answer and 5 points for the margin between alternative answers.
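The comparison itself is a straightforward tabulation. As a rough illustration, assuming two hypothetical DataFrames `balanced` and `tilted` like those produced by the scenario weighting sketched earlier, each with one column per survey question plus a weight column, the per-category shifts could be summarized like this:

```python
import pandas as pd

def weighted_pcts(df, question, weight_col="weight"):
    """Weighted percentage of respondents choosing each answer category of `question`."""
    totals = df.groupby(question)[weight_col].sum()
    return 100 * totals / totals.sum()

def scenario_differences(balanced, tilted, questions):
    """Shift in each answer category when moving from the tilted to the balanced version."""
    rows = []
    for q in questions:
        diff = weighted_pcts(balanced, q) - weighted_pcts(tilted, q)
        for answer, d in diff.items():
            rows.append({"question": q, "answer": answer, "difference": d})
    return pd.DataFrame(rows)

# diffs = scenario_differences(balanced, tilted, question_list)
# diffs["difference"].abs().mean()  -> average shift per answer category
# diffs["difference"].abs().max()   -> largest shift for any single answer
```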
One 3-point difference was on presidential job approval, a measure very strongly associated with the vote. In the balanced version, 39% approved of Trump’s job performance, while 58% disapproved. In the tilted version, 36% approved and 60% disapproved. Two other items also showed a 3-point difference on one of the response options. In the balanced version, 54% said that it was a bigger problem for the country that people did not see racism that was occurring, compared with 57% in the tilted version. Similarly, in the balanced version, 38% said that the U.S. had controlled the coronavirus outbreak “as much as it could have,” compared with 35% in the tilted version. All other questions tested showed smaller differences.
Opinion questions on issues that have been at the core of partisan divisions in U.S. politics tended to be the only ones that showed any difference between the balanced version and the tilted version. Preference for smaller versus bigger government, a fundamental dividing line between the parties, differed by 2 points between the versions. Perceptions of the impact of immigration on the country, a core issue for Donald Trump, also varied by 2 points between the two versions. The belief that human activity contributes “a great deal” to global climate change was 2 points higher in the tilted version. The share of Americans saying that government should do more to help the needy was 2 points higher in the tilted version than the balanced version.
Despite the fact that news audiences are quite polarized politically, the two versions typically differed only slightly in how many people said they had been relying on particular sources for news in the aftermath of the presidential election. The share who said that CNN had been a major source of news about the presidential election in the period after Election Day was 2 points higher in the tilted version than in the balanced version, while the share who cited Fox News as a major source was 1 point higher in the balanced version than in the tilted version.
The complete set of comparisons for all 48 survey questions is shown in the topline at the end of this report.
Why don’t big differences in candidate preference and party affiliation result in big differences in opinions on issues?
Opinions on issues and government policies are strongly, but not perfectly, correlated with partisanship and candidate preference. A minority of each candidate’s supporters hold views that are at odds with what their candidate or party favors. Among nonvoters, partisans’ support for their party’s traditional positions is even weaker, especially among Republicans. These facts lessen the impact of changing the balance of candidate support and party affiliation in a poll.
There’s almost never a one-to-one correspondence between the share of voters for a candidate and the share of people holding a particular opinion that aligns with the opinion of that candidate’s party. Three examples from a summer 2020 survey illustrate the point.
Asked whether they favor a larger government providing more services or a smaller government providing fewer services, nearly one-fourth of Biden’s supporters (23%) opted for smaller government, a position not usually associated with Democrats or Democratic candidates. On a question about whether the growing number of newcomers from other countries threatens American values or strengthens American society, nearly one-third of Trump’s supporters (31%) took the pro-immigrant view, even though the Trump administration took a number of steps to limit both legal and illegal immigration. And about one-fourth of Trump’s supporters (24%) said that it is the responsibility of the federal government to make sure all Americans have health care coverage, hardly a standard Republican Party position.
Shifting the focus to party affiliation among nonvoters, we see even less fidelity of partisans to issue positions typically associated with those parties. For example, nearly half of Republicans and independents who lean Republican but did not vote (47%) said that the growing number of immigrants from other countries strengthens American society. And 43% of them favor a larger government providing more services. A 55% majority of Republican nonvoters in this survey believe that it is the responsibility of the federal government to make sure that all Americans have health insurance coverage. This is still considerably smaller than the share of Democratic nonvoters who think the government is responsible for ensuring coverage (78%), but it is far more than we see among Republican voters.
These “defectors” from the party line, in both directions and among both voters and nonvoters, weaken the ability of changes in the partisan or voting composition of the sample to move the opinion questions. Adding more Trump voters and Republicans does add more skeptics about immigration, but nearly a third of the additional Trump voters say immigrants strengthen American society, a view shared by about half of Republican nonvoters. As a result, our survey question on immigration does not change in lockstep with changes in how many Trump supporters or Republicans are included in the poll. Similarly, the Biden voter group includes plenty of skeptics about a larger government: pump up his support and you get more supporters of bigger government, but, on balance, not as many as you might expect.
We want different things from opinion polls and election polls
Not all applications of polling serve the same purpose. We expect and need more precision from election polls because the circumstances demand it. In a closely divided electorate, a few percentage points matter a great deal. In a poll that gauges opinions on an issue, an error of a few percentage points typically will not matter for the conclusions we draw from the survey.
Those who follow election polls are rightly concerned about whether those polls are still able to produce estimates precise enough to describe the balance of support for the candidates. Election polls in highly competitive elections must provide a level of accuracy that is difficult to achieve in a world of very low response rates. Only a small share of the survey sample must change to produce what we perceive as a dramatic shift in the vote margin and potentially an incorrect forecast. As was shown in the graphical simulation earlier, an error of 4 percentage points in a candidate’s support can mean the difference between winning and losing a close election. In the context of the 2020 presidential election, a change of that small size could have shifted a poll’s estimate from a spot-on Biden lead of 4.4 points to a very inaccurate Biden lead of 12 points.
Differences of a magnitude that could make an election forecast inaccurate are less consequential when looking at issue polling. A flip in the voter preferences of 3% or 4% of the sample can change which candidate is predicted to win an election, but it isn’t enough to dramatically change judgments about opinion on most issue questions. Unlike the measurement of an intended vote choice in a close election, the measurement of opinions is more subjective and likely to be affected by how questions are framed and interpreted. Moreover, a full understanding of public opinion about a political issue rarely depends on a single question like the vote choice. Often, multiple questions probe different aspects of an issue, including its importance to the public.
Astute consumers of polls on issues usually understand this greater complexity and subjectivity and factor it into their expectations for what an issue poll can tell them. The goal in issue polling is often not to get a precise percentage of the public that chooses a position but rather to obtain a sense of where public opinion stands. For example, differences of 3 or 4 percentage points in the share of the public saying they would prefer a larger government providing more services matter less than whether that is a viewpoint endorsed by a large majority of the public or by a small minority, whether it is something that is increasing or decreasing over time, or whether it divides older and younger Americans.
How do we know that issue polling – even by the different or more lenient standards we might apply to it – is accurate?
The reality is that we don’t know for sure how accurate issue polling is. But good pollsters take many steps to improve the accuracy of their polls. Good survey samples are usually weighted to accurately reflect the demographic composition of the U.S. public. The samples are adjusted to match parameters measured in high-quality, high response rate government surveys that can be used as benchmarks. Many opinions on issues are associated with demographic variables such as race, education, gender and age, just as they are with partisanship. At Pew Research Center, we also adjust our surveys to match the population on several other characteristics, including region, religious affiliation, frequency of internet usage, and participation in volunteer activities. And although the analysis presented here explicitly manipulated party affiliation among nonvoters as part of the experiment, our regular approach to weighting also includes a target for party affiliation that helps minimize the possibility that sample-to-sample fluctuations in who participates could introduce errors. Collectively, the methods used to align survey samples with the demographic, social and political profile of the public help ensure that opinions correlated with those characteristics are more accurate.
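For readers unfamiliar with this kind of adjustment, the sketch below illustrates raking (iterative proportional fitting), a common way to align a sample with several benchmark distributions at once. Variable names and target shares are hypothetical, and production weighting typically involves more variables, weight trimming and diagnostic checks.

```python
import pandas as pd

def rake(df, margins, weight_col="weight", n_iter=50):
    """Iteratively adjust weights so weighted shares match each set of marginal targets.

    `margins` maps a column name to {category: target share}; every category
    present in the data must appear in its target dict.
    """
    out = df.copy()
    for _ in range(n_iter):
        for col, targets in margins.items():
            shares = out.groupby(col)[weight_col].sum() / out[weight_col].sum()
            out[weight_col] *= out[col].map(lambda c: targets[c] / shares[c])
    return out

# Hypothetical benchmark margins (illustrative shares, not official figures).
margins = {
    "education": {"hs_or_less": 0.38, "some_college": 0.31, "college_plus": 0.31},
    "region":    {"northeast": 0.17, "midwest": 0.21, "south": 0.38, "west": 0.24},
}
# weighted_sample = rake(sample, margins)
```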
As a result of these efforts, several studies have shown that properly conducted public opinion polls produce estimates very similar to benchmarks obtained from federal surveys or administrative records. While not providing direct evidence of the accuracy of measures of opinion on issues, they suggest that polls can accurately capture a range of phenomena including lifestyle and health behaviors that may be related to public opinion.
But it’s also possible that the topics of some opinion questions in polls – even if not partisan in nature – may be related to the reasons some people choose not to participate in surveys. A lack of trust in other people, or in institutions such as governments, universities, churches or science, might be an example of a phenomenon that leads both to nonparticipation in surveys and to errors in measures of questions related to trust. Surveys may have a smaller share of distrusting people than is likely true in the population, and so measures of these attitudes and anything correlated with them would be at least somewhat inaccurate. Polling professionals should be mindful of this type of potential error. And we know that measures of political and civic engagement in polls are biased upward: polls tend to overrepresent people interested and engaged in politics as well as those who take part in volunteering and other helping behaviors. Pew Research Center weights its samples to address both of these biases, but there is no guarantee that weighting completely solves the problem.
Does any of this suggest that undercounting Republican voters in polling is acceptable?
No. This analysis finds that polls about public opinion on issues can be useful and valid, even if the poll overstates or understates a presidential candidate’s level of support by margins seen in the 2020 election. But this does not mean that pollsters should quit striving to have their surveys accurately represent Republican, Democratic and other viewpoints. Errors in the partisan composition of polls can go in both directions. As recently as 2012, election polls slightly underestimated Barack Obama’s support.
Despite cautions from those inside and outside the profession, polling will continue to be judged, fairly or not, on the performance of preelection polls. A continuation of the recent underestimation of GOP electoral support would certainly do further damage to the field’s reputation. More fundamentally, the goal of the public opinion research community is to represent the public’s views, and anything within the profession’s control that threatens that goal should be remedied, even if the consequences for estimates on topics other than election outcomes are small. Pew Research Center is exploring ways to ensure that we reach the correct share of Republicans and that they are comfortable taking our surveys. We are also continually evaluating whether Republicans and Trump voters – or indeed, Democrats and Biden voters – in our samples are fully representative of those in the population.
Limitations of this analysis
One strength of this analysis is that the election is over, and it’s not necessary to guess at what Trump support ought to have been in these surveys. And by using respondents’ self-reported vote choice measured after the election, we avoid complications from respondents who may have changed their minds between taking the survey and casting their ballot.
However, this study is not without its limitations. It’s based on polls conducted by only one organization, Pew Research Center, and these polls are national in scope, unlike many election polls that focused on individual states. The underlying mechanism that weakens the association between levels of candidate support (or party affiliation) and opinions on issues should apply to polls conducted by any organization at any level of geography, but we examined it using only our surveys.
Another important assumption is that the Trump voters and Biden voters who agreed to be interviewed are representative of Trump voters and Biden voters nationwide with respect to their opinions on issues. We cannot know that for sure.