The workshop on broadband metrics discussed here was held in June 2006 and, in light of recent events, was either prescient or instrumental in helping to mobilize wider support for improving the state of our collective public knowledge of broadband networks. In May 2007, Commerce Committee Chairman Daniel Inouye (D-Hawaii), with a number of co-sponsors, introduced the Broadband Data Improvement Act (S.1492), which is designed to improve federal and state broadband data collection. According to Senator Inouye, “The first step in an improved broadband policy is ensuring that we have better data on which to build our efforts.”3 The Senate Commerce Committee reported the bill out of committee for consideration by the full Senate in July. In October 2007, the House Subcommittee on Telecommunications and the Internet reported out the Broadband Census of America Act to improve data collection on high-speed internet availability.
There is no disagreement among technology-policy makers that broadband is a basic infrastructure critical to the health of our economy and social life. Infrastructure and services continue to evolve, with continued growth in the penetration of first-generation DSL and cable modem services, the expanded availability of mobile 3G broadband and nomadic WiFi broadband, and the deployment of fiber-to-the-home services. These developments create the need for better data to track the progress and impacts of broadband service nationwide. In what follows, we describe the efforts of leading broadband researchers to provide a snapshot of the broadband data debate as it looked in June 2006.
Only ten years ago, it made sense to ask who had internet access and who did not. Now we ask: how fast is your connection? And how fast is fast? The Federal Communications Commission (FCC) currently defines high-speed service as greater than or equal to 200 kilobits per second (Kbps) in one direction, a decision announced in the agency's first report on broadband deployment, required by the Telecommunications Act of 1996. The 200 Kbps metric was selected for a number of reasons, including the desire to pick a data rate that would reflect a significant improvement over dial-up connections operating at 50 Kbps and would exclude ISDN connections at 128 Kbps. ISDN, in 1996, was generally available and marketed as an advanced service, but it was never widely adopted, in part because of high usage-sensitive pricing. The 200 Kbps metric would include most other emerging services commonly seen as high-speed internet access at the time.4
Today, broadband services offering peak download rates of several megabits per second (Mbps) are common, and new offerings based on fiber-to-the-home are being deployed that are capable of supporting tens of Mbps of data connectivity. Ten years ago, wireless internet connections were exotic; now they are an amenity at corner coffee shops, hotels, and terminals at major airports. There have also been significant advances in end-user equipment (e.g., routers supporting data rates in excess of 50 Mbps are available for less than $50). In light of these developments, the FCC has begun to reconsider its data collection policies.
In addition to definitional worries about broadband, various stakeholders increasingly ask about broadband deployment and use: Why do some residents not adopt broadband when it is available? How is broadband used by subscribers? Policymakers and community officials also inquire about the state of competition in broadband markets and the provision of broadband by alternative service providers, including mobile broadband over 3G networks, fixed wireless broadband, broadband over power lines, and fiber-to-the-home deployments. There is also significant variation in broadband adoption across population sub-segments. Although 73% of American adults were internet users at the time of the workshop, there are still demographic groups and locales (mostly rural) where service options are non-existent or limited, and where usage rates are significantly below the national average. And despite recent rapid advances, especially in Africa and Latin America, there remains a global digital divide in both access and quality of service.
Origins of the workshop
At the Telecommunications Policy Research Conference in the fall of 2005, a group of experienced investigators who were probing the deployment of broadband service from different perspectives discovered they shared a frustration: the data on which their respective analyses relied were flawed, limited, and in some instances inappropriate. This constrained the kinds of questions they sought to answer and biased findings that, in turn, could affect public policy decisions. The group also felt that the public bureaucracies that collect data and generate widely used statistics are inherently conservative and slow to employ new methodologies that might provoke criticism.
The outcome of this conference encounter was a one-day, invitation-only meeting in Washington the following June, sponsored jointly by the Pew Internet & American Life Project; the University of Texas at Austin, with support from the National Science Foundation; and the Massachusetts Institute of Technology. Sixty-five people participated as speakers, panelists, and members of the audience in a program of prepared sessions and open panel discussions, allowing for lively exchanges.5 This essay describes the issues raised by the speakers and participants, along with recommendations for going forward. Its focus is measurement, data, and the big questions that are important to formulating public policy and that drive current research on broadband. The speakers looked primarily at broadband in the U.S., although several presentations provided international comparisons.