What is the American Trends Panel (ATP)?
The ATP is Pew Research Center’s nationally representative online survey panel. The panel is composed of more than 10,000 adults selected at random from across the entire U.S.
Respondents have been recruited over the years, and they take our surveys frequently. The panel provides a relatively efficient method of data collection compared with fresh samples because the participants have already agreed to take part in more surveys. The major effort required with a fresh sample – making an initial contact, persuading respondents to take part and gathering the necessary demographic information for weighting – is not needed once a respondent has joined a panel. Another advantage of the ATP is that considerable information about panelists’ views and experiences can be accumulated over time. Because panelists may respond to multiple surveys on different topics, it is possible to build a much richer portrait of the public than is feasible in a single survey interview, which must be limited in length to prevent respondent fatigue.
But panels like the ATP have some limitations as well. They can be expensive to create and maintain, requiring more extensive technical skill and oversight than a one-off survey. A second concern is that panelists may drop out over time (known as attrition), making the panel less representative of the target population as time passes if the kinds of people who drop out differ from those who tend to remain. To address this, the ATP recruits new panelists from across the country every year.
Another concern is that repeated questioning of the same individuals may yield different results than we would obtain with independent or “fresh” samples. If the same questions are asked repeatedly, respondents may remember their answers and feel some pressure to be consistent over time. The reverse is also a concern, as respondents might become “conditioned” to change their behavior because of questions asked previously. For example, questions about voting might spur them to register to vote. Respondents also become more skilled at answering particular kinds of questions. This may be beneficial in some instances, but to the extent it occurs, the panel results may be different from what would have been obtained from independent samples of people who have not had the practice in responding to surveys. Fortunately, research has detected no meaningful conditioning on the ATP.
Recruiting panelists to the ATP
The ATP was created in 2014, with the first cohort of panelists invited to join at the end of a large, national, landline and cellphone random-digit-dial survey conducted in both English and Spanish. Two additional recruitments were conducted using the same method, in 2015 and 2017.
In 2018, the ATP switched from telephone to address-based recruitment. Invitations are sent to a random, address-based sample (ABS) of households selected from the U.S. Postal Service’s Delivery Sequence File (DSF).
Randomization carries through to the selection of a respondent within each household, which helps maintain the representativeness of the sample. The adult with the next birthday in each selected household is asked to go online to complete a survey, at the end of which they are invited to join the panel.
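As a rough illustration of the next-birthday rule (which respondents apply themselves when answering the mailed invitation), the sketch below picks the household adult whose birthday falls soonest after a reference date. The function and field names are illustrative, not part of any actual ATP system.

```python
from datetime import date

def days_until_next_birthday(birthday: date, today: date) -> int:
    # Days from `today` until the person's next birthday (leap-day birthdays ignored for simplicity).
    upcoming = birthday.replace(year=today.year)
    if upcoming < today:
        upcoming = birthday.replace(year=today.year + 1)
    return (upcoming - today).days

def next_birthday_adult(adults: list[dict], today: date) -> dict:
    # The adult whose birthday comes soonest is the one asked to complete the survey.
    return min(adults, key=lambda a: days_until_next_birthday(a["birthday"], today))

# Hypothetical two-adult household: Adult B's birthday comes first, so B is selected.
household = [
    {"name": "Adult A", "birthday": date(1980, 3, 15)},
    {"name": "Adult B", "birthday": date(1995, 11, 2)},
]
print(next_birthday_adult(household, today=date(2024, 10, 1))["name"])  # Adult B
```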
For the online panel to be truly nationally representative, adults who do not use the internet must be represented on the panel somehow. In 2021, the share of non-internet users in the U.S. was estimated at 7%, and while this is a relatively small group, its members are quite different demographically from those who go online. In its early years, the ATP conducted interviews with non-internet users via paper questionnaires. However, in 2016, the Center switched to providing non-internet households with tablets, which they could use to take the surveys online. The Center works with Ipsos, an international market and opinion research organization, to recruit panelists, manage the panel and conduct the surveys.
Drawing samples for ATP surveys
One of the benefits of a large panel like the ATP is that there are more panelists than a typical survey requires. Rather than selecting all 10,000-plus panelists each time, many ATP surveys interview only a subset (e.g., 2,500) of the panelists. This reduces the burden on individual panel members, sparing them from having to respond every time the Center fields a survey.
Drawing subsamples (rather than interviewing everyone on the panel) also allows Center researchers to make the samples more representative. Like most survey panels, the ATP has proportionately too many of some groups (e.g., college-educated adults) and proportionately too few of others (e.g., young adults). ATP subsamples are drawn to correct these imbalances so that the responding sample closely resembles the U.S. public overall. This produces a sample that requires less weighting to align it with the population and thus yields a larger effective sample size.
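The link between weighting and effective sample size can be illustrated with the widely used Kish approximation, in which the effective sample size shrinks as the weights become more variable. The figures below are hypothetical, and this is a simplified sketch rather than the Center's exact calculation:

```python
def effective_sample_size(weights: list[float]) -> float:
    # Kish approximation: n_eff = (sum of weights)^2 / (sum of squared weights).
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

# Hypothetical 2,500-interview samples: the better-balanced one needs milder
# corrective weights, so more of the nominal sample size is retained.
unbalanced = [0.4] * 1500 + [2.0] * 1000   # stronger correction needed
balanced = [0.8] * 1500 + [1.2] * 1000     # milder correction needed
print(round(effective_sample_size(unbalanced)))  # ~1594 of 2,500
print(round(effective_sample_size(balanced)))    # 2400 of 2,500
```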
Fielding ATP surveys
It takes about seven days for a questionnaire to be translated, programmed and made ready for its first phase of testing. The main objective during testing is to make sure that questions render on screen the way they should, and that question logic and skip patterns are all in place. For example, if the response to “How many people live in your household, including you?” is “1,” subsequent questions asking about other household members, such as “Are you the parent or guardian of any children under the age of 12 who live in your household?” should be skipped. Questions that require a numeric entry are also checked to make sure that incorrect formats (for example, decimals) cannot be entered for questions such as “What year were you born?” This stage of testing is also when most formatting problems and typos are caught and flagged for correction. Certain questions are programmed differently for mobile devices so that questions and text appear legibly without the need for horizontal scrolling or excessive vertical scrolling.
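The kind of skip-pattern and input-format checks described above can be sketched in a few lines. The functions below are purely illustrative stand-ins for the survey software's own logic:

```python
def skip_household_follow_ups(household_size_answer: str) -> bool:
    # One-person households should not see questions about other household members.
    return household_size_answer.strip() == "1"

def valid_birth_year(raw_entry: str, earliest: int = 1900, latest: int = 2024) -> bool:
    # Accept only whole numbers in a plausible range; rejects decimals and text.
    if not raw_entry.strip().isdigit():
        return False
    return earliest <= int(raw_entry) <= latest

print(skip_household_follow_ups("1"))   # True: the parent/guardian question is skipped
print(valid_birth_year("1985.0"))       # False: decimals are not an accepted format
print(valid_birth_year("1985"))         # True
```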
A day or two is dedicated to fixing any programming errors detected during testing, as well as to updating the draft questionnaire and graduating it to a final stage. Programmers then send final test links (called “end-to-end” links) for a last round of checking before survey launch. This round of testing verifies that the survey works on all of the most common web browsers (e.g., Edge, Chrome, Firefox, Safari) and devices (e.g., personal computers, Android phones, iPads, iPhones). Researchers also conduct breakoff tests to ensure that respondents who stop partway through can resume the survey later, including on a different device.
Once testing is completed, the survey is ready for fielding. This process begins with a “soft launch,” in which about 60 panelists who typically respond quickly are notified that the survey is ready. The soft launch usually takes place a day before the full launch of the survey. Data quality checks are performed on this initial dataset to verify the data format and to allow enough time to correct any issues flagged in the survey program before invitations go out to all panelists selected for that survey. On the day the survey launches, invitations are sent via email and, for panelists who have consented to receive SMS messages, via text message as well. Several days after launch, panelists who have not yet responded are sent up to two email or text reminders. From time to time, interactive voice response reminder calls are also made to tablet households that previously consented to receive these reminders.
Data collection usually closes six to 14 days after full launch, depending on the research needs. Researchers then conduct a series of data quality checks to flag any issues with respondent satisficing or with how answers were recorded in the dataset. All respondents are offered a post-paid incentive for their participation, which they can choose to receive as a check or as a gift code to Amazon.com. Incentive amounts vary depending on whether the respondent belongs to a part of the population that is harder or easier to reach. Differential incentive amounts are designed to increase panel survey participation among groups that traditionally have low survey response propensities.
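One common example of a satisficing check in panel surveys is flagging “straightlining,” in which a respondent gives the identical answer to every item in a grid. The sketch below is a generic illustration and does not describe the Center's specific procedures:

```python
def straightlined(grid_answers: list) -> bool:
    # True when every item in a multi-item grid received the identical answer.
    return len(grid_answers) > 1 and len(set(grid_answers)) == 1

print(straightlined(["Somewhat agree"] * 6))           # True: flagged for review
print(straightlined(["Agree", "Disagree", "Agree"]))   # False
```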
Weighting ATP surveys
The ATP data is weighted in a multistep process that accounts for multiple stages of sampling and nonresponse that occur at different points in the survey process. First, each panelist begins with a base weight that reflects their probability of selection for their initial recruitment survey (and the probability of being invited to participate in the panel in cases where only a subsample of respondents were invited). The base weights for panelists recruited in different years are scaled to be proportionate to the effective sample size for all active panelists in their cohort. To correct for nonresponse to the initial recruitment surveys and gradual panel attrition, the base weights for all active panelists are calibrated to align with the population benchmarks identified in the accompanying table to create a full-panel weight.
For ATP waves in which only a subsample of panelists are invited to participate, a wave-specific base weight is created by adjusting the full-panel weights for subsampled panelists to account for any differential probabilities of selection for the particular panel wave. For waves in which all active panelists are invited to participate, the wave-specific base weight is identical to the full-panel weight.
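As a simplified sketch of the two base-weight steps described above, the functions below compute an inverse-probability base weight and then adjust a full-panel weight for a wave's subsampling probability. The probabilities and weights shown are hypothetical:

```python
def base_weight(recruitment_selection_prob: float, panel_invite_prob: float = 1.0) -> float:
    # Inverse of the probability of being sampled for recruitment and invited to the panel.
    return 1.0 / (recruitment_selection_prob * panel_invite_prob)

def wave_base_weight(full_panel_weight: float, wave_selection_prob: float) -> float:
    # Adjust the full-panel weight for the chance of being drawn into this wave's subsample;
    # if all active panelists are invited, the probability is 1 and the weight is unchanged.
    return full_panel_weight / wave_selection_prob

print(base_weight(1 / 50_000))      # 50000.0 for a hypothetical 1-in-50,000 selection probability
print(wave_base_weight(1.3, 0.25))  # 5.2 for a panelist with a 1-in-4 chance of selection this wave
print(wave_base_weight(1.3, 1.0))   # 1.3 when every active panelist is invited
```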
In the final weighting step, the wave-specific base weights for panelists who completed the survey are again calibrated to match population benchmarks. The Center calibrates ATP surveys to both demographic benchmarks (e.g., age, education, sex, race, ethnicity, geography) and non-demographic benchmarks (e.g., political party affiliation, religious affiliation, registered voter status, volunteerism). These weights are then trimmed (typically at about the 1st and 99th percentiles) to reduce the loss in precision stemming from variance in the weights. Sampling errors and tests of statistical significance take into account the effect of weighting.
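Calibration to marginal benchmarks is often implemented with raking (iterative proportional fitting). The sketch below rakes a handful of hypothetical respondents to two benchmark distributions and then trims the weights at roughly the 1st and 99th percentiles; it is a simplified illustration, not the Center's production weighting code:

```python
import numpy as np

def rake(weights, categories, targets, iterations=25):
    # Iterative proportional fitting: repeatedly rescale weights so the weighted
    # share of each category matches its benchmark, one variable at a time.
    w = np.asarray(weights, dtype=float)
    for _ in range(iterations):
        for var, benchmark in targets.items():
            values = np.asarray(categories[var])
            total = w.sum()
            for level, share in benchmark.items():
                mask = values == level
                current = w[mask].sum() / total
                if current > 0:
                    w[mask] *= share / current
    return w

def trim(weights, lower_pct=1, upper_pct=99):
    # Clip extreme weights at roughly the 1st and 99th percentiles to limit variance.
    low, high = np.percentile(weights, [lower_pct, upper_pct])
    return np.clip(weights, low, high)

# Five hypothetical respondents calibrated to two benchmark distributions.
categories = {"sex": ["F", "F", "M", "M", "M"],
              "college_grad": ["yes", "no", "yes", "no", "no"]}
targets = {"sex": {"F": 0.5, "M": 0.5},
           "college_grad": {"yes": 0.35, "no": 0.65}}
final_weights = trim(rake(np.ones(5), categories, targets))
print(final_weights.round(2))
```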
Maintaining the ATP
Pew Research Center works with Ipsos to recruit panelists, manage the panel and conduct the surveys. Every year, we aim to “refresh” the panel by adding new panelists. Panelists also take an annual profile survey, which gives them the opportunity to update certain aspects of their profile, such as their income or the number of children in their household. If a panelist does not participate in the annual profile survey, they are retired from the panel. Occasionally, we also retire members of groups that are demographically overrepresented on the panel, such as those with higher levels of education. We do this to keep the panel’s composition close to that of the general population and to avoid having to weight certain panelists too heavily. We also retire panelists who have been inactive for an extended period and those who are flagged as repeat offenders for behaviors such as high refusal rates (skipping 80% or more of the questions) on two or more recent surveys.
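The “repeat offender” rule mentioned above can be expressed as a simple check on a panelist's recent refusal rates. The 80% threshold and the two-survey rule come from the description in this section; the function name and the notion of which surveys count as “recent” are illustrative:

```python
def flag_repeat_offender(recent_refusal_rates: list[float],
                         rate_threshold: float = 0.80,
                         min_flagged_surveys: int = 2) -> bool:
    # Retirement flag: skipped 80% or more of the questions on two or more recent surveys.
    flagged = sum(1 for rate in recent_refusal_rates if rate >= rate_threshold)
    return flagged >= min_flagged_surveys

print(flag_repeat_offender([0.85, 0.10, 0.92, 0.05]))  # True: two surveys above the threshold
print(flag_repeat_offender([0.85, 0.10, 0.05, 0.05]))  # False: only one
```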
We make a promise to our panelists to protect their identity, and several checks and balances are in place to make sure the Center remains true to its word. Personal identifying information (PII), such as a panelist’s name or county of residence, is maintained solely by the core panel administration team and is never made available to the general public. In some cases, additional steps such as data swapping – randomly exchanging the values of sensitive questions among a small number of respondents with similar characteristics – are also taken to protect panelists’ information.
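A minimal sketch of the data-swapping idea: the values of one sensitive variable are shuffled among respondents who share the same demographic profile, so no individual record can be tied to its original answer. The grouping keys and field names below are hypothetical:

```python
import random
from collections import defaultdict

def swap_sensitive_values(records, group_keys, sensitive_field, seed=None):
    # Shuffle one sensitive field among respondents who share the same demographic
    # profile, leaving every other field in each record untouched.
    rng = random.Random(seed)
    groups = defaultdict(list)
    for index, record in enumerate(records):
        groups[tuple(record[key] for key in group_keys)].append(index)
    swapped = [dict(record) for record in records]
    for indices in groups.values():
        values = [records[i][sensitive_field] for i in indices]
        rng.shuffle(values)
        for i, value in zip(indices, values):
            swapped[i][sensitive_field] = value
    return swapped

records = [
    {"age_group": "30-49", "region": "South", "sensitive_answer": "Yes"},
    {"age_group": "30-49", "region": "South", "sensitive_answer": "No"},
    {"age_group": "65+", "region": "West", "sensitive_answer": "No"},
]
print(swap_sensitive_values(records, ["age_group", "region"], "sensitive_answer", seed=1))
```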
Learn more about the history of the American Trends Panel, including its creation, development and growth.