• Future of the Internet

    The FCC’s Open Internet order: A cautionary tale for regulators

    by Brendan Carr | 12/Nov/2015 | 7 min read

    The FCC’s decision to apply utility-style regulation to the Internet is resulting in less investment and reduced deployment, and it will inevitably lead to less robust competition in the broadband market, argues Brendan Carr, legal advisor to FCC Commissioner Ajit Pai.

     

    The Digital Post: You suggested that the FCC decision to reclassify broadband as a utility could undermine the US telecom success story. What are the main negative consequences?

    Brendan Carr: The FCC’s decision to apply heavy-handed, utility-style regulation to the Internet is putting the U.S.’s success story at risk. It is already leading broadband providers to cut back on their investments and put off network upgrades that would have brought faster speeds and more reliable broadband to consumers.

    And the decision to put the U.S.’s success at risk was an entirely unnecessary one. In the 1990s, American policymakers decided on a bipartisan basis that the Internet should develop unfettered by government regulation.

    Regulators applied a light-touch regulatory framework that led to unparalleled levels of investment and, in turn, innovation. The private sector spent $1.3 trillion over the past 15 years to deploy broadband infrastructure in the U.S. That level of investment compares very favorably in the international context.

    A study of 2011 and 2012 data shows that wireless providers in the U.S. invested twice as much per person as their counterparts in Europe ($110 per person compared to $55). And the story is the same on the wireline side, with U.S. providers investing more than twice those in Europe ($562 per household versus $244).

    Consumers benefited immensely from all of that investment. On the wireless side, 97% of Americans have access to three or more facilities-based providers. More than 98% of Americans now have access to 4G LTE. Network speeds are 30% faster in the U.S. than in Europe.

    The story is similar on the wireline side: 82% of Americans and 48% of rural Americans have access to 25 Mbps broadband speeds, but those figures are only 54% and 12% in Europe, according to a 2014 study that looked at 2011 and 2012 data. And in the U.S., broadband providers deploy fiber to the premises about twice as often as they do in Europe (23% versus 12%).

    Facilities-based intermodal competition is also thriving with telephone, cable, mobile, satellite, fixed wireless, and other Internet service providers competing vigorously against each other.

    But unfortunately, the U.S. is now putting all of this success at risk. At the beginning of 2015, the FCC decided to apply public-utility-style regulation to the Internet over the objections of two FCC Commissioners.

    I fear that we are already seeing the results of that decision. Capital expenditures by the largest wireline broadband providers plunged 12% in the first half of 2015, compared to the first half of 2014. The decline among all major broadband providers was 8%. This decrease represents billions of dollars in lost investment and tens of thousands of lost jobs.

    And the decline in broadband investment is not limited to the U.S.’s largest providers. Many of the nation’s smallest broadband providers have already cut back on their investments and deployment. Take KWISP Internet, a provider serving 475 customers in rural Illinois.

    KWISP told the Commission that, because of the agency’s decision to impose utility-style regulation, it was delaying network improvements that would have upgraded customers from 3 Mbps to 20 Mbps service and capacity upgrades that would have reduced congestion.

    These and many more examples all point to the same conclusion. The FCC’s decision to adopt heavy-handed Internet regulation is resulting in less investment and reduced deployment. It will inevitably lead to less robust competition in the broadband market and a worse experience for U.S. broadband users.

    But I am optimistic that the U.S. will ultimately return to the successful, light-touch approach to the Internet that spurred massive investments in our broadband infrastructure. Efforts are underway in both the courts and Congress to reverse the FCC’s decision. And following next year’s presidential election, the composition of the FCC could be substantially different than it is today.

     

    The Digital Post: What is your opinion about the Net Neutrality legislation due to be adopted by the EU? What are the main differences with the Open Internet order?

    Brendan Carr: I think the FCC’s decision to adopt utility-style regulation should serve as a cautionary tale for regulators that are examining this issue. FCC Commissioner Ajit Pai, who I work for, has described the FCC’s decision as a solution that won’t work to a problem that doesn’t exist.

    When the FCC acted, its rulemaking record was replete with evidence that utility-style regulation would slow investment and innovation in the broadband networks. And the evidence on the other side of the ledger? Non-existent.

    Net neutrality activists have trotted out a parade of horribles and hypothesized harms, but there was no evidence whatsoever of systemic market failure. The FCC adopted utility-style regulations even though there was no evidence that the Internet is broken or in need of increased government regulation.

    In the absence of any market failure, consumers are far better served by policies that promote competition. Utility-style regulation heads in the opposite direction: it imposes substantial new costs on broadband providers and makes it harder for competitors, particularly smaller broadband providers, to compete in the marketplace. After all, rules designed to regulate a monopoly will inevitably push the market toward a monopoly.

     

    The Digital Post: Next year the European Commission will propose a major revision of the EU current framework on telecoms. From your perspective what should be the priorities?

    Brendan Carr: When I met with government officials and industry stakeholders in Brussels, one point kept coming up: the need to increase investment in Europe’s broadband markets. And I agree that embracing policies that will spur greater broadband investment is a key priority. According to a Boston Consulting Group report that just came out, Europe will need an additional €106 billion to meet its Digital Agenda goals.

    Historically, the U.S. embraced a number of policies that led to massive investments in broadband networks. For one, U.S. regulators embraced facilities-based competition. We rejected the notion that the broadband market was a natural monopoly.

    Therefore, we pursued policies that encouraged broadband providers to build their own networks, rather than using their competitors’ infrastructure. For example, we eliminated mandatory unbundling obligations, which were skewing investment decisions and deterring network construction.

    We also made it easier for facilities-based providers from previously distinct sectors to enter the broadband market and compete against each other.

    For instance, by making it easier for telephone companies to enter the video market and cable companies to enter the voice market, we strengthened the business case for those carriers to upgrade their networks, since offering a triple-play bundle of video, broadband, and voice was critical to being able to compete successfully. Because of these policies, capital flowed into networks, and consumers benefited from better, faster, and more reliable broadband infrastructure.

    We also took steps on the wireless side to promote investment and competition. We embraced a flexible use policy for wireless spectrum. Instead of mandating that a particular spectrum band be used with a specific type of wireless technology, the government left that choice to the private sector, which has a much better sense of consumer demand.

    This enabled wireless networks in the U.S. to evolve with technology and to do so much more quickly than if operators had to obtain government sign-off each step of the way. Having license terms and conditions that are relatively consistent across spectrum bands has also made it easier for providers to invest in the mobile broadband marketplace.

     

    The Digital Post: The EU is still grappling with a fragmented and somewhat rigid approach to spectrum, despite the efforts of the European Commission. What can Europe learn from the FCC policy on spectrum?

    Brendan Carr: The FCC’s spectrum policies have led to a tremendous amount of innovation and investment in our wireless networks. I would like to highlight a few of those here.

    First, the FCC has embraced a flexible use policy for wireless spectrum. Instead of mandating that a particular spectrum band be used with a specific type of wireless technology, the government left that choice to the private sector, which has a much better sense of consumer demand.

    This has enabled wireless networks in the U.S. to evolve with technology and to do so much more quickly than if operators had to obtain government sign-off each step of the way. For instance, nearly 50% of all mobile connections in the U.S. are now 4G, whereas that figure is only 10% worldwide.

    Second, the FCC makes spectrum bands available on a nationwide basis with relatively uniform license terms and build out obligations. So rather than auctioning licenses that cover only part of the country one year and then auctioning other licenses in another year, all of the licenses for a particular spectrum band are offered in the same auction.

    This approach gives broadband operators greater certainty and helps them plan their deployments while minimizing transaction costs. It also makes it easier for operators to obtain handsets and other equipment that will operate on their spectrum bands. All of that ultimately means that consumers get access to the spectrum faster and at lower costs.

    Third, the FCC tries to keep its eye on filling the spectrum pipeline. It takes years for new spectrum bands to be brought to market, and so waiting for consumer demand to increase before starting the process of allocating more spectrum for consumer use is not an efficient approach.

    The U.S. has engaged in a continuous process of reallocating spectrum for mobile broadband. We auctioned AWS-1 spectrum in 2006, 700 MHz spectrum in 2008, and 65 MHz of mid-band spectrum earlier this year, and we’re set to auction our 600 MHz spectrum in 2016. To date, our spectrum auctions have raised over $91 billion for the U.S. Treasury.

    Fourth, the FCC has embraced policies that make it easier for operators to deploy their spectrum. One way we’ve done that is by adopting what the FCC calls “shot clocks.” These require state and local governments to act on an operator’s request to construct a new tower or add an antenna to an existing structure within a set period of time, say within 90 or 180 days.

    Another step the FCC has taken is to streamline the process of obtaining the historic preservation and other approvals that are required when an operator deploys broadband infrastructure. Combined, these actions have allowed spectrum to be deployed faster and have meant that consumers get quicker access to new mobile broadband offerings.

     

    photo credit: Eris Stassi
  • Future of the Internet

    Obama, net neutrality and the “Trabant Syndrome”

    by Andrea Renda | 17/Dec/2014 | 7 min read

    Rigid net neutrality rules risk becoming an ineffective remedy to a badly defined problem. That’s why politicians should leave such a complex issue to technical, independent regulators. A restrictive approach would not foster innovation as many argue.

    U.S. President Barack Obama’s recent statement in favor of net neutrality is a good example of why politicians should stay away from bold statements when dealing with complex issues. Indeed, net neutrality is so complex, technically, economically and politically, that no one has found a way to square the circle: the aggravating and confusing factor is that the word “neutrality” sounds appealing, whereas “diversity” and “discrimination” inevitably sound negative to politicians.

    This is why it is better to leave the hot potato to technical, independent regulators. President Obama certainly had good intentions: but there is reason to doubt that what he is advocating (putting unprecedented and ill-advised pressure on the FCC) would make users better off. Here’s why.

    First, the Internet is not neutral, and it never will be.

    As often invoked by neutrality advocates, the Internet was designed to protect end users from discrimination and usage limitations, and to allow no intrusion into or inspection of files by any central “intelligence”.

    However, this is not what the Internet is today, and not only because of the recent scandals generated by massive surveillance by government authorities in many countries. Since the 2010 FCC Open Internet Order entered into force, the “information superhighway” has become populated by cars with different engines and many toll lanes, which allow different speeds.

    Companies such as Apple, Microsoft, Google, Netflix and many others make regular use of traffic acceleration services, either developed in-house or purchased from third parties such as Akamai, Limelight, Huawei, and Level 3. This is why some services work better than others on the Internet: in a fully neutral network, this would not be possible.

    Mandating net neutrality for telcos and cablecos would not make the Internet neutral: the players that are able to either invest in their “content delivery networks” or purchase expensive services from third parties will still have a toll lane that others can’t afford.

    Second, “over the top” products and services such as search engines, wireless and cloud platforms are not (and should not be made) neutral. Giant wireless platforms such as Android, iOS, Windows give priority to certain apps over others, and even block certain (very few) applications. They carry their own default browsers and apps.

    Search engines such as Google, Yahoo! and Bing have to show some results first, and must do so in a way that matches their users’ preferences; giant cloud providers such as Amazon and Microsoft sell suites that include some favored products, leaving others out or relegating them to the second row.

    A neutral Internet would entail that all these companies refrain from customizing services for their end users: indeed, the European Commission seems to be lured by the sirens of “search neutrality” and “platform neutrality” in its antitrust investigation against Google. Would this be good or bad? Most likely, bad.

    Third, mandatory net neutrality would not foster innovation as many argue. A “mantra” of neutrality advocates is that net neutrality is the only guarantee that a “new Google” or a “new Facebook” will emerge in the future, just as these successful young companies have done in the past.

    But reality is different: try to name recent examples of successful start-ups, and see how many of them have emerged as new “apps” for existing platforms.

    This shows how the non-neutral world of Internet platforms is lowering, rather than raising, barriers to entry in the marketplace. The same is happening in the cloud: as companies compete to become the leading cloud provider, they have an incentive to host as many promising start-ups as possible on their platforms. This is why Internet hyper-giants do not initially charge start-ups for services such as sub-domains, enterprise tools, search engine optimization capacity, and access to content delivery networks.

    Based on the above, mandatory net neutrality risks becoming an ineffective remedy to a badly defined problem. If it is imposed only on telcos and cablecos, then the Internet will remain non-neutral as it is today, and competition for traffic acceleration services might even be reduced. But if neutrality is extended to search engines, operating systems, wireless platforms, then the Internet will die.

    This is why FCC Chairman Wheeler is rightly careful: the solution to the problem can only be cautious and, if anything, deferential to the extraordinary value that the non-neutral Internet is creating for our society every day. This does not mean that specialized services should be left entirely unregulated.

    To the contrary, they might well deserve careful monitoring, a good dose of technology to monitor quality of service, and sharpened competition rules.

    Most importantly, there is a need to prevent the end-to-end Internet from being cannibalized by one-way networks: otherwise, video will kill the Internet star, too. A nuanced solution, based on the healthy co-existence of specialized services and best-effort Internet, is best suited to the ever-changing nature of the Internet: imposing neutrality, by contrast, would be tantamount to throwing out the (cyber-)baby with the bath water.

    The temptation to be resisted is praising neutrality as synonymous with freedom, democracy, and openness.

    It is not. Full-fledged, rigid net neutrality rules are equivalent to what the Trabant was in East Germany: the only car that people could have, very neutral, very bad, identical for everybody.

    It became famous in the Western world when the Berlin wall fell 25 years ago, and thousands of East Germans drove their Trabants over the border: once in the “free” world, they immediately abandoned their “neutral” cars, and started a new, non-neutral life.
