About the Sponsors
The Pew Internet Project is an initiative of the Pew Research Center, a nonprofit “fact tank” that provides information on the issues, attitudes and trends shaping America and the world. The Pew Internet Project explores the impact of the internet on children, families, communities, the workplace, schools, health care and civic/political life. The Project is nonpartisan and takes no position on policy issues. Support for the Project is provided by The Pew Charitable Trusts. More information is available at www.pewresearch.org/internet
The California HealthCare Foundation is an independent philanthropy committed to improving the way health care is delivered and financed in California. By promoting innovations in care and broader access to information, our goal is to ensure that all Californians can get the care they need, when they need it, at a price they can afford. More information is available at www.chcf.org
Acknowledgments
The authors are grateful for the expertise provided by Princeton Survey Research Associates, particularly Evans Witt and Jennifer Su.
Veenu Aulakh and the California HealthCare Foundation provided not only monetary support, but inspiration and guidance throughout the project.
Alan Greene, Gilles Frydman, John Grohol, Sarah Greene, Teresa Graedon, Joshua Seidman, Ted Eytan, Fard Johnmar, and Victor Balaban reviewed survey drafts and did not complain (too loudly) when we had to cut some of their favorite questions.
Some of the best ideas for this research came from all those who write and comment on E-patients.net, The Health Care Blog, and Twitter. We look forward to continuing the conversation!
Methodology
This report is based on the findings of a daily tracking survey on Americans’ use of the internet. The results in this report are based on data from telephone interviews conducted by Princeton Survey Research Associates between November 19 and December 20, 2008, among a national sample of 2,253 adults. For results based on the national sample, one can say with 95% confidence that the error attributable to sampling and other random effects is plus or minus 2.3 percentage points. For results based on internet users (n=1,650), the margin of sampling error is plus or minus 2.7 percentage points. In addition to sampling error, question wording and practical difficulties in conducting telephone surveys may introduce some error or bias into the findings of opinion polls.
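As a rough check on these figures, the margin of error for a proportion follows the standard formula z·sqrt(p(1−p)/n), typically inflated by a design effect to account for weighting. The Python sketch below shows the unadjusted calculation; the design-effect value needed to reproduce the reported 2.3 and 2.7 points exactly is an assumption, not something stated in this report.

```python
import math

def margin_of_error(n, p=0.5, deff=1.0, z=1.96):
    """Half-width of the 95% confidence interval for a proportion,
    in percentage points, optionally inflated by a design effect."""
    return z * math.sqrt(deff * p * (1 - p) / n) * 100

# Unadjusted (deff = 1) values; the reported 2.3 and 2.7 points are
# slightly larger, consistent with a modest design effect from weighting.
print(round(margin_of_error(2253), 1))  # ~2.1 points for the full sample
print(round(margin_of_error(1650), 1))  # ~2.4 points for internet users
```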
A combination of landline and cellular random digit dial (RDD) samples was used to represent all adults who have access to either a landline or cellular telephone. Both samples were provided by Survey Sampling International, LLC (SSI) according to PSRAI specifications. Numbers for the landline samples were selected using standard list-assisted RDD methods from active blocks (area code + exchange + two-digit block number) that contained three or more residential directory listings. The cellular samples were not list-assisted but were drawn through systematic sampling from dedicated wireless 100-blocks and shared service 100-blocks with no directory-listed landline numbers.
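For readers unfamiliar with the mechanics, a list-assisted RDD landline number can be thought of as an active 100-block (area code + exchange + two-digit block) with two random final digits appended. The sketch below is purely illustrative; the block values are hypothetical, and the actual frames were supplied by SSI.

```python
import random

def draw_landline_numbers(active_blocks, k):
    """Illustrative list-assisted RDD draw: pick an active 100-block
    (8 digits: area code + exchange + two-digit block) and append two
    random final digits to form a 10-digit telephone number."""
    return [random.choice(active_blocks) + f"{random.randint(0, 99):02d}"
            for _ in range(k)]

# Hypothetical active blocks (each with 3+ residential directory listings).
print(draw_landline_numbers(["41555501", "60955512"], 3))
```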
New sample was released daily and was kept in the field for at least five days. The sample was released in replicates, which are representative subsamples of the larger population. This ensured that complete call procedures were followed for the entire sample. At least 10 attempts were made to complete an interview at sampled households. The calls were staggered over times of day and days of the week to maximize the chances of making contact with a potential respondent. Each household received at least one daytime call in an attempt to find someone at home.
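As a concrete picture of what a replicate is, the sketch below splits a drawn sample into equally sized random subsamples, each of which mirrors the full sample and can be released for dialing on a given day. This is an illustration only, not PSRAI's actual field-management tooling.

```python
import random

def split_into_replicates(sample, n_replicates):
    """Shuffle the drawn sample and deal it into n_replicates random
    subsamples, each representative of the full sample."""
    shuffled = random.sample(sample, len(sample))
    return [shuffled[i::n_replicates] for i in range(n_replicates)]

# Hypothetical usage: release one replicate per day of dialing.
daily_replicates = split_into_replicates(list(range(1000)), 30)
```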
In each contacted household in the landline sample, interviewers asked to speak with the youngest male currently at home. If no male was available, interviewers asked to speak with the youngest female at home. This systematic respondent selection technique has been shown to produce samples that closely mirror the population in terms of age and gender. For the cellular sample, interviews were conducted with the person who answered the phone. Interviewers verified that the person was an adult and in a safe place before administering the survey. Cellular sample respondents were offered a post-paid cash incentive for their participation. All interviews completed on any given day were considered to be the final sample for that day.
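The within-household selection rule is simple enough to state as code. The sketch below is a hypothetical illustration (interviewers apply the rule verbally; the roster structure is invented for clarity).

```python
def select_landline_respondent(adults_at_home):
    """Youngest-male / youngest-female rule: ask for the youngest adult
    male currently at home; if none is available, ask for the youngest
    adult female. `adults_at_home` is a hypothetical list of
    {'sex', 'age'} dicts describing the adults currently at home."""
    if not adults_at_home:
        return None
    males = [p for p in adults_at_home if p["sex"] == "M"]
    pool = males if males else adults_at_home
    return min(pool, key=lambda p: p["age"])

# Example: a household with a 52-year-old woman and a 67-year-old man.
print(select_landline_respondent([{"sex": "F", "age": 52},
                                  {"sex": "M", "age": 67}]))  # selects the man
```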
Non-response in telephone interviews produces some known biases in survey-derived estimates because participation tends to vary for different subgroups of the population, and these subgroups are likely to also vary on questions of substantive interest. In order to compensate for these known biases, the sample data are weighted in analysis.
The sample was balanced to match population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), population density, and telephone usage. The basic weighting parameters came from a special analysis of the Census Bureau’s 2007 Annual Social and Economic Supplement (ASEC). The population density parameter came from 2000 Census data. The cell phone usage parameter came from an analysis of the July-December 2006 National Health Interview Survey.
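The report does not name the balancing procedure. Raking (iterative proportional fitting) is one common way to match a sample to several marginal population distributions at once, and the sketch below assumes that approach purely for illustration; the dimension names and target shares are hypothetical.

```python
import numpy as np

def rake_weights(categories, targets, max_iter=50, tol=1e-6):
    """Illustrative raking sketch: repeatedly adjust weights so each
    dimension's weighted category shares match the target population shares.
    categories: dict of dimension -> per-respondent category codes
    targets:    dict of dimension -> {category: population share}"""
    n = len(next(iter(categories.values())))
    w = np.ones(n)
    for _ in range(max_iter):
        max_change = 0.0
        for dim, codes in categories.items():
            codes = np.asarray(codes)
            for cat, share in targets[dim].items():
                mask = codes == cat
                current = w[mask].sum() / w.sum()
                if current > 0:
                    factor = share / current
                    w[mask] *= factor
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    return w * n / w.sum()  # scale so weights average 1.0

# Hypothetical two-dimension example (sex and age group).
cats = {"sex": ["M", "F", "F", "M", "F"],
        "age": ["18-49", "50+", "18-49", "50+", "50+"]}
tgts = {"sex": {"M": 0.49, "F": 0.51},
        "age": {"18-49": 0.55, "50+": 0.45}}
weights = rake_weights(cats, tgts)
```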
The response rate for the landline sample was 21 percent; the response rate for the cellular sample was 25 percent.