James Bell, Director of International Survey Research for the Pew Research Center, explains the methodology used by the Pew Research Center’s Global Attitudes Project to assure the quality and accuracy of surveys conducted abroad.
Q: Each year, the Pew Research Center’s Global Attitudes Project conducts public opinion polls around the world. Isn’t it difficult to poll in some countries? And how confident are you in the poll findings?
A: There is no question that fielding surveys in foreign countries can be challenging, especially as the majority of our polls are conducted through face-to-face interviews. That said, the rigorous quality standards we apply to our Pew Global Attitudes survey work abroad are the same as those we apply here in the U.S. Namely, we strive to ensure that our surveys accurately represent the populations of individual countries, and that we ask questions that deliver valid information about how people view important events and issues of the day.
The task of ensuring our surveys represent the adult population of a foreign country requires employing rigorous sampling methods. The first step is to work closely with our principal research partners to identify local polling firms with knowledgeable staff and proven track records in designing large survey samples. The good news is that political and economic changes over the past two decades have generally increased the demand for social and market research and have made it easier for us to find capable local research firms.
The not-so-good news is that, compared with the U.S., the penetration of landline phones in many countries has not reached a point at which we feel comfortable fielding national surveys by phone. And while mobile phones are quickly becoming an integral part of life for many people around the globe, reliable information about mobile users and how to integrate them into national samples remains limited. Thus, with the exception of a few countries such as Britain, France, Germany and Japan, we have erred on the side of reliability and have administered our surveys in person, rather than by phone. This means that unlike national surveys in the U.S., which can be completed in just days, most Global Attitudes surveys take two or more weeks to complete.
Due to their use of proven sampling techniques, the local vendors we work with can achieve nationally representative surveys by conducting face-to-face surveys with about 1,000 respondents. The key is ensuring that these respondents are selected in a random, unbiased fashion – and that all adult members of a country’s population are eligible for inclusion in a survey. More often than not, we meet these requirements by using multi-stage, cluster samples.
What this means is that rather than randomly selecting individuals directly (by phone, for example), we first randomly select clusters of individuals – beginning with relatively large territorial units, akin to counties in the U.S. Once these primary clusters are selected, we randomly select smaller territorial units, until we work our way down to city blocks or villages. At this stage, interviewers either visit addresses selected randomly from a list, or they follow a so-called “random walk” in which they visit every third or fourth residence along a set route. At each residence, interviewers randomly select a respondent by using a Kish grid (a selection table applied to a complete listing of household members) or by selecting the adult who has had the most recent birthday.
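The stages described above can be sketched in a few lines of code. This is an illustrative sketch, not Pew’s actual sampling software: the data layout (regions containing blocks containing household lists), the function names, and the `days_since_birthday` field are all assumptions made for the example.

```python
import random

def multistage_sample(regions, n_regions, n_blocks, n_households, rng=random):
    """Multi-stage cluster sample: randomly pick large territorial units,
    then blocks or villages within them, then households within each block.

    `regions` maps region name -> {block name -> list of household ids};
    this layout is assumed purely for illustration."""
    chosen = []
    for region in rng.sample(sorted(regions), n_regions):
        blocks = regions[region]
        for block in rng.sample(sorted(blocks), n_blocks):
            pool = blocks[block]
            chosen.extend(rng.sample(pool, min(n_households, len(pool))))
    return chosen

def select_respondent_last_birthday(household_members):
    """Within a selected household, interview the adult whose birthday was
    most recent -- a common alternative to a full Kish grid. Each member is
    a dict with assumed keys 'age' and 'days_since_birthday'."""
    adults = [m for m in household_members if m["age"] >= 18]
    return min(adults, key=lambda m: m["days_since_birthday"])
```

Because selection is random at every stage and every adult in a chosen household is eligible, the resulting sample is probability-based rather than a convenience sample.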
As mentioned above, in most countries surveyed by Pew Global Attitudes, multi-stage, cluster samples are used to field nationally representative surveys. However, in a few instances we are unable to conduct full, national surveys face-to-face. Sometimes, the limiting factors are cost and time. In China, for example, it would take many weeks to collect face-to-face interviews from across the country, and it would be prohibitively expensive to transport trained interviewers long distances. Therefore, for now our surveys represent only 57% of the Chinese population (mostly those in urban areas). We hope to expand our coverage in the future.
In other instances, concern for the safety of interviewers keeps us from fielding truly national surveys. In Pakistan, for example, our surveys exclude 15% of the population due to concerns about sending interviewers into frequently violent border regions. As in China, the result is a respondent base that is disproportionately urban. But regardless of the scope, our surveys in China and Pakistan are held to the same methodological standards as our surveys conducted elsewhere.
One of the quality checks we perform for all Pew Global Attitudes surveys is to compare the demographic characteristics of the people included in our surveys with census or other official data that describe the gender, age or educational makeup of the population. In cases where our data departs by more than a few percentage points from official statistics, we may decide to adjust our data through the mathematical procedure of weighting. Weighting is a common practice in survey research. Properly applied, it can improve our ability to accurately report how prevalent or variable attitudes are in a given society.
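A minimal sketch of this kind of demographic weighting, assuming a single adjustment variable (say, gender) with known population shares; real survey weighting typically balances several variables at once (for instance, via raking), and the group labels here are hypothetical:

```python
from collections import Counter

def post_stratification_weights(sample_groups, population_shares):
    """Give each respondent a weight equal to (population share of their
    group) / (sample share of their group), so the weighted sample
    matches the known population makeup.

    sample_groups: one group label per respondent, e.g. "men"/"women".
    population_shares: dict of group label -> population proportion."""
    n = len(sample_groups)
    sample_share = {g: c / n for g, c in Counter(sample_groups).items()}
    return [population_shares[g] / sample_share[g] for g in sample_groups]
```

For example, if a 10-person sample contains 7 women and 3 men but the census says the adult population is split 50/50, each woman receives a weight of 0.5/0.7 ≈ 0.71 and each man 0.5/0.3 ≈ 1.67, so the weighted gender split is exactly 50/50.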
For all our surveys, of course, we calculate country-specific margins of sampling error, based on the number of people surveyed and whether the survey relied on random-digit dialing by phone or a multi-stage, cluster sample. These margins of error are integral to our ability to identify attitudinal shifts or statistically significant differences. Over the years, our key trends have proven highly reliable; they have moved in directions that track well with political and economic developments or have remained relatively stable in the absence of major events or changes at the local, regional or global level. This is another reason we stand behind the accuracy of our data.
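As a rough illustration of how sample size and sample design feed into the margin of error, the standard formula for a proportion can be widened by a “design effect” that accounts for clustering; the numbers below are textbook values, not Pew’s published margins:

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of error for an estimated proportion p from n respondents.

    deff is the design effect: about 1.0 for a simple random sample and
    typically greater than 1.0 for multi-stage cluster samples, which
    widens the margin. The values used here are illustrative."""
    return z * math.sqrt(deff * p * (1 - p) / n)
```

With a simple random sample of 1,000 and p = 0.5, the margin is about ±3.1 percentage points; a cluster design with a design effect of 2 widens it to roughly ±4.4 points, which is why a cluster sample of the same size carries a wider error margin.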
Beyond the technical aspects of sample design, our confidence in Pew Global Attitudes’ findings stems from the time and effort we put into the design and translation of our questionnaires. In a given year, our survey questions may be translated into twenty or more languages. Coordinating so many translations can be daunting. Fortunately, since the Pew Global Attitudes Project began in 2002, we have assiduously archived translated versions of the annual questionnaire. As we develop each new round of the survey, we are able to draw on these tried-and-tested translations when we repeat specific items or trends.
But past translations are of little help when it comes to new questions or new countries. In such instances, we rely on local polling firms to first translate our questions into the appropriate local language or languages. As a standard operating procedure, we then have the translation re-translated into English by a bilingual person who has not seen the original English-language questionnaire. This is called back-translation, and it is highly useful for identifying questions that have been poorly or incorrectly translated. For complex questions or especially challenging languages, we often take the additional step of consulting with linguistic experts to perfect the translation of new items.
Regardless of language, the goal of the translation process is always the same: to ensure that we ask questions that reflect our intended meaning, with results that can be compared cross-nationally.
A final reason for confidence in our findings is the careful training of interviewers and close supervision of fieldwork. In each country, prior to fieldwork, local research firms train their interviewers to properly administer the questionnaire. This includes briefing interviewers on the overall purpose of the survey, the intent of specific questions and how to manage both asking questions and recording answers. In the case of both phone and face-to-face surveys, interviewers participate in mock interviews in order to gain familiarity with the questionnaire. These training sessions can highlight ways to improve the administration of the survey so that questions are clearly communicated and answers correctly recorded.
Once fieldwork begins, interviews are regularly monitored by supervisors. For phone surveys, this typically involves a supervisor listening in to a live interview or calling back a respondent to verify that an interview was completed with an eligible individual. Similarly, with respect to face-to-face surveys, supervisors will travel with interview teams to urban neighborhoods or rural villages to make certain that interviewers visit residences randomly drawn from a list or selected randomly from a pre-determined route. Supervisors will also later visit a certain percentage of residences to confirm that eligible individuals have been interviewed.
Quality checks during the survey administration process help ensure that Pew Global Attitudes surveys reach their target populations and ask the intended questions. By using careful field supervision, appropriate sample design and thorough translations, we can be sure that our surveys accurately represent public opinion around the world.