The American Trends Panel Survey Wave 14.5
The experiential study consisted of 14 short online surveys that were administered two per day from Feb. 24 through March 1, 2016. The January wave of the panel was conducted by Pew Research Center in association with the John S. and James L. Knight Foundation. Survey invitations were sent at different times each day, and responses were accepted for two hours after the invitations were sent. Panelists who completed the January wave on the web and reported that they get news online (from a desktop or laptop computer or mobile device) were asked to participate in the experiential study. Of the 4,236 respondents who were asked, 3,827 agreed to participate in the experiential study.8 The analysis in this report relies on the 2,078 panelists who completed at least 10 of the 14 surveys.
For the experiential study, the data were weighted using a process similar to the one used for the full January wave. The base weight, which accounts for the initial probability of selection, was adjusted for the propensity to have completed 10 or more of the experiential study surveys. The data were then weighted to match all online news users from the January wave on the following variables: gender, age, education, race and Hispanic ethnicity, region, population density, telephone service, internet access, frequency of internet use, volunteerism, party affiliation and the use of 10 different social networking sites for news.
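The report does not describe the exact weighting algorithm, but matching a sample to a set of marginal distributions like these is conventionally done by raking (iterative proportional fitting). The sketch below is illustrative only: the variables, categories, target margins and base weights are hypothetical stand-ins for two of the dimensions listed above, not figures from the study.

```python
import pandas as pd

def rake(df, base_weight, targets, n_iter=100, tol=1e-8):
    """Iterative proportional fitting: repeatedly adjust weights so the
    weighted distribution of each raking variable matches its target margin."""
    w = df[base_weight].astype(float).copy()
    for _ in range(n_iter):
        max_shift = 0.0
        for var, margin in targets.items():
            current = w.groupby(df[var]).sum() / w.sum()        # weighted shares now
            factor = df[var].map(pd.Series(margin) / current)   # target / current
            max_shift = max(max_shift, (factor - 1).abs().max())
            w = w * factor
        if max_shift < tol:
            break
    return w / w.mean()  # normalize so the average weight is 1

# Hypothetical example using two of the raking dimensions named above;
# all values are made up for illustration.
panel = pd.DataFrame({
    "gender":  ["F", "M", "F", "M", "F", "M", "F", "F"],
    "educ":    ["HS", "BA", "BA", "HS", "BA", "HS", "HS", "BA"],
    "base_wt": [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.0],
})
targets = {
    "gender": {"F": 0.52, "M": 0.48},
    "educ":   {"HS": 0.60, "BA": 0.40},
}
panel["final_wt"] = rake(panel, "base_wt", targets)
print(panel)
```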
Repeated questioning over the course of a week could theoretically condition respondents to answer differently. To test this, we compared the responses of users who had answered at least 10 surveys at the beginning of the week with their responses at the end of the week. One notable shift was that respondents were more likely to say they had gotten news on three or more topics, and less likely to say they had gotten news on only one topic, in the first survey than in all subsequent surveys. This may be because respondents noticed that when they chose multiple topics, they were prompted to pick their main topic, and the questions that followed referred only to that main topic. To the extent that this occurred, the conclusions of this report are unaffected, because our analysis focuses only on these main topics. Additionally, in the last survey, seeking out news and getting news about politics were higher than in most other surveys throughout the week. This is likely because the last day of the study fell on Super Tuesday (March 1, 2016).
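A comparison of this kind can be run as a simple test of two proportions. The sketch below is a hypothetical illustration, not the study's actual procedure; every count is made up, and it ignores the clustering of surveys within respondents discussed later in this section.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts for illustration only: respondents reporting news on
# three or more topics in the first survey vs. all later surveys combined.
count = [640, 5200]     # "three or more topics" responses (made up)
nobs = [2078, 26000]    # completed surveys in each period (made up)

stat, pvalue = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")
```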
For the person-level analysis, the following table shows the unweighted sample sizes and the error attributable to sampling that would be expected at the 95% level of confidence for different groups in the survey:
Sample sizes and sampling errors for other subgroups are available upon request.
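Margins of sampling error like those in the table follow the usual formula for a proportion, inflated by the design effect that weighting introduces. The sketch below uses a hypothetical subgroup size and design effect; neither number comes from the report.

```python
import math

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    """95% margin of sampling error for a proportion, inflated by the
    design effect (deff) introduced by weighting."""
    return z * math.sqrt(deff * p * (1 - p) / n)

# Hypothetical values for illustration only: a subgroup of 500 respondents
# with a design effect of 1.3 from weighting.
moe = margin_of_error(n=500, deff=1.3)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 5.0 points
```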
For the instance-level analysis, it is not possible to report a single margin of sampling error because news instances are clustered within respondents. Because of this clustering, the margin of error differs from question to question, depending on the extent to which individual respondents tend to answer the same way across all of their instances. All statistical tests and estimates used to produce this report were performed using methods that account for the effect of this clustering.
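The report does not name the specific methods, but a common way to account for this kind of clustering is to compute respondent-clustered (cluster-robust) standard errors. The sketch below, using statsmodels with an entirely made-up instance-level dataset, shows the general approach rather than the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Made-up instance-level data: several news instances per respondent.
n_resp, per_resp = 200, 6
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_resp), per_resp),
    "sought_out": rng.integers(0, 2, n_resp * per_resp),      # 1 = sought out the news
    "topic_politics": rng.integers(0, 2, n_resp * per_resp),  # 1 = main topic was politics
})

# Linear probability model, with standard errors clustered on respondent so
# that repeated instances from the same person do not overstate precision.
model = smf.ols("sought_out ~ topic_politics", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
print(model.summary().tables[1])
```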
In addition to sampling error, one should bear in mind that question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of opinion polls.
The experiential study had a response rate of 55% (2,078 responses among 3,803 who were eligible and agreed to participate). Taking into account the combined, weighted response rate for the recruitment surveys (10.0%), attrition from panel members who were removed at their request or for inactivity, and agreement to participate in the experiential study, the cumulative response rate for the experiential study is 1.4%.9
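A cumulative response rate of this kind is the product of the stage-level rates. In the arithmetic below, the 10.0% recruitment rate and the experiential-study response rate come from the text, while the combined retention-and-agreement factor is a hypothetical placeholder chosen only so that the product lands near the published 1.4%; the report does not publish that factor separately.

```python
recruitment_rate = 0.100        # combined, weighted recruitment response rate (from the text)
retention_and_agreement = 0.25  # HYPOTHETICAL placeholder; not published in the report
study_rate = 2078 / 3803        # experiential study response rate, about 55% (from the text)

cumulative = recruitment_rate * retention_and_agreement * study_rate
print(f"cumulative response rate ~ {cumulative:.1%}")  # ~1.4% with these inputs
```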