A behind-the-scenes blog about research methods at Pew Research Center.

When the unexpected happens, what’s a survey researcher to do?

High-quality public opinion surveys can take weeks to conduct. Pew Research Center’s Global Attitudes Survey, for example, is fielded simultaneously in 30 or more countries each year, and fieldwork can take more than eight weeks in some locations. The potential for a significant, unexpected event to occur during the interviewing phase of a survey is a looming challenge for survey researchers — especially as certain events might make the data collected before the event less comparable with what’s collected after.

Unexpected events can take myriad forms, ranging from weather events that might affect fieldwork to global summits, terrorist attacks or snap elections (with potential implications for attitudes toward political leaders, trust in the government, national priorities and foreign relations).

While these types of events certainly may cause researchers some (hopefully temporary) headaches, they may also provide a unique opportunity to look at how events can cause changes in people’s opinions, serving as a type of “natural experiment.”

Here are two recent examples of this from our international survey work and how we approached them.

Phone survey in South Korea: The case of the Singapore Summit

On June 12, 2018, U.S. President Donald Trump and North Korean leader Kim Jong-un held the first-ever meeting between a sitting U.S. president and a North Korean head of state. The meeting, held in Singapore after being abruptly canceled and then rescheduled, took place about two weeks after we had begun fieldwork for a survey in South Korea. The Trump-Kim meeting was consequential in South Korea, where President Moon Jae-in pledged support for the talks and expressed optimism about a “new chapter of peace and cooperation.”

Given the praise from South Korean leadership and the significance of the meeting, we expected that South Koreans’ confidence in Trump might be higher after the event than before it, potentially inflating his ratings and giving us a misreading when compared with our year-over-year trend. But how could we assess that more precisely?

Our South Korean survey was conducted via mobile phone, and dialing wasn’t designed to be concentrated demographically or geographically at specific points during fieldwork. That meant we could expect phone numbers called before and after the Trump-Kim meeting to result in relatively similar respondents.

Our practice in South Korea is to call every phone number up to seven times to increase the chances that harder-to-reach populations — for example, younger people or those who work long hours — are represented in the final sample of respondents. Because of this callback process, there was a possibility of demographic differences between those we reached earlier in the survey period and those we reached later.

When we examined these unweighted demographic differences, we saw that, by and large, the people reached before the summit and afterward had similar profiles. To account for any remaining imbalance, we also created separate post-stratification weights for those interviewed before and after the summit.
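
To make the two steps concrete, here is a minimal sketch in Python of how such a check and separate weighting could look. Everything in it is a hypothetical stand-in: the simulated data, the column names and the population targets are illustrative, not the survey’s actual variables or weighting targets.

```python
import numpy as np
import pandas as pd

# Simulated respondent file standing in for the survey data; all
# variable names and values are hypothetical placeholders.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "interview_date": pd.to_datetime("2018-05-29")
        + pd.to_timedelta(rng.integers(0, 35, n), unit="D"),
    "age_group": rng.choice(["18-34", "35-54", "55+"], n),
    "sex": rng.choice(["F", "M"], n),
})

SUMMIT_DATE = pd.Timestamp("2018-06-12")
df["post_summit"] = df["interview_date"] >= SUMMIT_DATE

# Step 1: compare the unweighted demographic profiles of respondents
# reached before and after the summit.
for var in ["age_group", "sex"]:
    print(pd.crosstab(df[var], df["post_summit"], normalize="columns"))

def rake(group, targets, n_iter=50):
    """Simple raking (iterative proportional fitting): adjust weights so
    each variable's weighted distribution approaches its target."""
    w = pd.Series(1.0, index=group.index)
    for _ in range(n_iter):
        for var, target in targets.items():
            current = w.groupby(group[var]).sum() / w.sum()
            w = w * group[var].map(target / current)
    return w

# Illustrative population targets, made up for this sketch.
targets = {
    "age_group": pd.Series({"18-34": 0.30, "35-54": 0.40, "55+": 0.30}),
    "sex": pd.Series({"F": 0.51, "M": 0.49}),
}

# Step 2: weight the pre- and post-summit respondents separately.
df["weight"] = (
    df.groupby("post_summit", group_keys=False)[["age_group", "sex"]]
    .apply(lambda g: rake(g, targets))
)
```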

Having established broad demographic similarity of the two groups, we looked at whether the summit affected respondents’ evaluations of Trump. Before the meeting, 43% had confidence in Trump regarding world affairs, compared with 48% who reported confidence following it. This difference was not statistically significant at the 95% confidence level. This may have been due in part to the relatively limited sample size of the post-summit group, so we examined it more closely via two other checks.
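
As a sketch of that significance check, the pre/post shares can be compared with a standard two-proportion z-test. The group sizes below are hypothetical placeholders, not the survey’s actual counts, so the exact p-value is illustrative only.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_pre, n_post = 700, 300        # hypothetical group sizes
p_pre, p_post = 0.43, 0.48      # shares confident in Trump, pre and post

successes = np.array([round(p_pre * n_pre), round(p_post * n_post)])
nobs = np.array([n_pre, n_post])

stat, pvalue = proportions_ztest(successes, nobs)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")  # p > 0.05 with these counts
```

The same test applies to any pre/post comparison of shares, including the placebo check described next.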

First, we conducted a “placebo” test of sorts by asking ourselves whether South Koreans’ opinions of other world leaders — which we would not expect to be affected by the Singapore Summit — also were different between the time periods in question. As an example, we looked at Japanese Prime Minister Shinzo Abe, who was not involved in the summit. Among South Koreans, confidence in Abe did not change at all: It was 10% both before and after the Trump-Kim meeting.

Second, we conducted a regression, which allowed us to control for multiple demographic factors concurrently, giving us more confidence that differences we saw with regard to Trump were related to when people were surveyed, rather than who was surveyed.

Results of the regression suggest that even when controlling for age, sex and education, South Koreans interviewed after the summit had more confidence in Trump than those interviewed before the meeting.
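
A minimal sketch of that kind of model, using simulated data, might be a logistic regression of confidence in Trump on a post-summit indicator plus demographic controls. All variable names and values here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated respondent-level data standing in for the survey file.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "post_summit": rng.integers(0, 2, n),
    "age_group": rng.choice(["18-34", "35-54", "55+"], n),
    "sex": rng.choice(["F", "M"], n),
    "education": rng.choice(["less than college", "college+"], n),
})
# Outcome: 1 = confident in Trump, built with a small post-summit bump.
df["confident_trump"] = rng.binomial(1, 0.43 + 0.05 * df["post_summit"])

# The coefficient on post_summit estimates the pre/post difference
# net of age, sex and education.
model = smf.logit(
    "confident_trump ~ post_summit + C(age_group) + C(sex) + C(education)",
    data=df,
).fit()
print(model.summary())
```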

Face-to-face surveys in Greece: The case of the snap election

During our 2019 fieldwork for a nationally representative survey of Greek adults, the country’s leaders scheduled a parliamentary snap election that fell midway through our interviewing period. This provided a chance for us to explore some key themes before and after the election, such as general satisfaction with democracy. It also allowed us to analyze how satisfaction with democracy changed among “electoral winners” (those whose preferred candidate or party won) and “electoral losers.”

Unlike in South Korea, fieldwork in Greece was conducted through face-to-face interviews in people’s homes. Fieldwork was not rolled out around the country randomly; instead, it prioritized certain regions and times because of weather conditions or transportation logistics.

Given this reality, we were concerned that any differences we saw before and after the election might simply reflect the fact that people interviewed in one region of the country had significantly different attitudes from those in another, regardless of the snap election.

One way to address these challenges is to look at people’s attitudes before and after the election only in regions where interviews took place in both time periods. For example, if interviews were conducted in the broader Athens region both pre- and post-election, we could feel more confident comparing pre-election and post-election opinions within Athens.

When we looked just at the geographic regions that had interviews both pre-election and post-election, we saw that the age and education profiles of the respondents were relatively similar. Even so, as in South Korea, we created separate post-stratification weights for those interviewed before and after the election.
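
Here is a minimal Python sketch of that restriction and check, again on simulated data. The election date is the actual July 7, 2019, snap election; the region names, fieldwork dates and variables are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1200
df = pd.DataFrame({
    "region": rng.choice(
        ["Attica", "Central Macedonia", "Thessaly", "Crete"], n),
    "interview_date": pd.to_datetime("2019-06-13")
        + pd.to_timedelta(rng.integers(0, 40, n), unit="D"),
    "age_group": rng.choice(["18-34", "35-54", "55+"], n),
    "education": rng.choice(["less than college", "college+"], n),
})

ELECTION_DATE = pd.Timestamp("2019-07-07")
df["post_election"] = df["interview_date"] >= ELECTION_DATE

# Keep only regions with interviews in both time periods.
covered = df.groupby("region")["post_election"].nunique() == 2
df_both = df[df["region"].isin(covered[covered].index)]

# Compare unweighted age and education profiles, pre vs. post.
for var in ["age_group", "education"]:
    print(pd.crosstab(df_both[var], df_both["post_election"],
                      normalize="columns"))
```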

We were also concerned that some of these geographic regions were quite large and that even within Athens, neighborhoods might differ from one another. Even though we typically return to each household up to three times and over multiple days to complete an interview, there were very few neighborhoods that had interviews before and after the snap election, making statistical inference at that level challenging. Analyzing at the regional level could offer insights, but there were many potential confounding factors that required consideration, such as age, gender, education and ideology. Once again, we returned to using regressions.

Using regressions helped us feel confident that any differences we observed were related to the snap election rather than to different types of people being interviewed at different times. It also allowed us to explore whose attitudes shifted following the election. By interacting the variable indicating whether a respondent was interviewed after the election with the respondent’s ideology, we saw that people on the ideological right appeared more satisfied post-election. In other words, those who were ideologically closer to the center-right party that won the election seemed more satisfied with democracy after the results were known. In contrast, people on the left were similarly satisfied with democracy before and after the election.
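
A sketch of that interaction model on simulated data might look like the following, with satisfaction with democracy as the outcome. All names and effect sizes are hypothetical, chosen only to mirror the pattern described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1200
df = pd.DataFrame({
    "post_election": rng.integers(0, 2, n),
    "ideology": rng.choice(["left", "center", "right"], n),
    "age": rng.integers(18, 85, n),
    "sex": rng.choice(["F", "M"], n),
    "education": rng.choice(["less than college", "college+"], n),
})
# Outcome: 1 = satisfied with democracy, with a simulated post-election
# bump concentrated among respondents on the right.
bump = 0.15 * ((df["ideology"] == "right") & (df["post_election"] == 1))
df["satisfied"] = rng.binomial(1, 0.40 + bump)

# post_election * C(ideology) expands to both main effects plus their
# interaction; the interaction terms capture whose attitudes shifted.
model = smf.logit(
    "satisfied ~ post_election * C(ideology) + age + C(sex) + C(education)",
    data=df,
).fit()
print(model.summary())
```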

Other approaches

These are not the only ways to look at differences in public opinion before and after unexpected events like the Singapore Summit or a snap election. Researchers might consider a regression discontinuity design, for example, depending on the type of event. If a survey contains measures of political interest or news attentiveness, researchers might also be able to look at changes in attitudes among the more attentive population in particular. And if a survey includes benchmarks, these comparisons might prove helpful in understanding how changes pre- and post-event relate to known parameters.
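
For instance, a bare-bones regression discontinuity sketch could treat the interview date as the running variable and the event date as the cutoff, fitting separate slopes on each side within a chosen bandwidth. This is an illustrative toy version on assumed data, with no bandwidth selection or robustness checks.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated interviews over six weeks, centered on a hypothetical event.
rng = np.random.default_rng(4)
n = 2000
days = rng.integers(-21, 21, n)            # days relative to the event
outcome = rng.binomial(1, 0.43 + 0.05 * (days >= 0))
df = pd.DataFrame({"days": days,
                   "post": (days >= 0).astype(int),
                   "outcome": outcome})

# Local linear fit with separate slopes on each side of the cutoff,
# within a hand-picked 14-day bandwidth.
window = df[df["days"].abs() <= 14]
model = smf.ols("outcome ~ post + days + post:days", data=window).fit()
print(model.params["post"])                # estimated jump at the cutoff
```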
