The FCC’s decision to apply utility-style regulation to the Internet is resulting in less investment and reduced deployment, and it will inevitably lead to less robust competition in the broadband market, argues Brendan Carr, legal advisor to FCC Commissioner Ajit Pai.
The Digital Post: You suggested that the FCC decision to reclassify broadband as a utility could undermine the US telecom success story. What are the main negative consequences?
Brendan Carr: The FCC’s decision to apply heavy-handed, utility-style regulation to the Internet is putting the U.S.’s success story at risk. It is already leading broadband providers to cut back on their investments and put off network upgrades that would have brought faster speeds and more reliable broadband to consumers.
And the decision to put the U.S.’s success at risk was an entirely unnecessary one. In the 1990s, American policymakers decided on a bipartisan basis that the Internet should develop unfettered by government regulation.
Regulators applied a light-touch regulatory framework that led to unparalleled levels of investment and, in turn, innovation. The private sector spent $1.3 trillion over the past 15 years to deploy broadband infrastructure in the U.S. That level of investment compares very favorably in the international context.
A study of 2011 and 2012 data shows that wireless providers in the U.S. invested twice as much per person as their counterparts in Europe ($110 per person compared to $55). And the story is the same on the wireline side, with U.S. providers investing more than twice those in Europe ($562 per household versus $244).
Consumers benefited immensely from all of that investment. On the wireless side, 97% of Americans have access to three or more facilities-based providers. More than 98% of Americans now have access to 4G LTE. Network speeds are 30% faster in the U.S. than in Europe.
The story is similar on the wireline side: 82% of Americans and 48% of rural Americans have access to 25 Mbps broadband speeds, but those figures are only 54% and 12% in Europe, according to a 2014 study that looked at 2011 and 2012 data. And in the U.S., broadband providers deploy fiber to the premises about twice as often as they do in Europe (23% versus 12%).
Facilities-based intermodal competition is also thriving with telephone, cable, mobile, satellite, fixed wireless, and other Internet service providers competing vigorously against each other.
But unfortunately, the U.S. is now putting all of this success at risk. At the beginning of 2015, the FCC decided to apply public-utility-style regulation to the Internet over the objections of two FCC Commissioners.
I fear that we are already seeing the results of that decision. Capital expenditures by the largest wireline broadband providers plunged 12% in the first half of 2015, compared to the first half of 2014. The decline among all major broadband providers was 8%. This decrease represents billions of dollars in lost investment and tens of thousands of lost jobs.
And the decline in broadband investment is not limited to the U.S.’s largest providers. Many of the nation’s smallest broadband providers have already cut back on their investments and deployment. Take KWISP Internet, a provider serving 475 customers in rural Illinois.
KWISP told the Commission that, because of the agency’s decision to impose utility-style regulation, it was delaying network improvements that would have upgraded customers from 3 Mbps to 20 Mbps service and capacity upgrades that would have reduced congestion.
These and many more examples all point to the same conclusion. The FCC’s decision to adopt heavy-handed Internet regulation is resulting in less investment and reduced deployment. It will inevitably lead to less robust competition in the broadband market and a worse experience for U.S. broadband users.
But I am optimistic that the U.S. will ultimately return to the successful, light-touch approach to the Internet that spurred massive investments in our broadband infrastructure. Efforts are underway in both the courts and Congress to reverse the FCC’s decision. And following next year’s presidential election, the composition of the FCC could be substantially different than it is today.
The Digital Post: What is your opinion about the Net Neutrality legislation due to be adopted by the EU? What are the main differences with the Open Internet order?
Brendan Carr: I think the FCC’s decision to adopt utility-style regulation should serve as a cautionary tale for regulators that are examining this issue. FCC Commissioner Ajit Pai, who I work for, has described the FCC’s decision as a solution that won’t work to a problem that doesn’t exist.
When the FCC acted, its rulemaking record was replete with evidence that utility-style regulation would slow investment and innovation in the broadband networks. And the evidence on the other side of the ledger? Non-existent.
Net Neutrality activists have trotted out a parade of horribles and hypothesized harms, but there was no evidence whatsoever of systemic market failure. The FCC adopted utility-style regulations even though it presented no evidence that the Internet is broken or in need of increased government regulation.
In the absence of any market failure, consumers are far better served by policies that promote competition. Utility-style regulation heads in the opposite direction: it imposes substantial new costs on broadband providers and makes it harder for competitors, particularly smaller broadband providers, to compete in the marketplace. After all, rules designed to regulate a monopoly will inevitably push the market toward a monopoly.
The Digital Post: Next year the European Commission will propose a major revision of the EU current framework on telecoms. From your perspective what should be the priorities?
Brendan Carr: When I met with government officials and industry stakeholders in Brussels, one point kept coming up: the need to increase investment in Europe’s broadband markets. And I agree that embracing policies that will spur greater broadband investment is a key priority. According to a Boston Consulting Group report that just came out, Europe will need an additional €106 billion to meet its Digital Agenda goals.
Historically, the U.S. embraced a number of policies that led to massive investments in broadband networks. For one, U.S. regulators embraced facilities-based competition. We rejected the notion that the broadband market was a natural monopoly.
Therefore, we pursued policies that encouraged broadband providers to build their own networks, rather than using their competitors’ infrastructure. For example, we eliminated mandatory unbundling obligations, which were skewing investment decisions and deterring network construction.
We also made it easier for facilities-based providers from previously distinct sectors to enter the broadband market and compete against each other.
For instance, by making it easier for telephone companies to enter the video market and cable companies to enter the voice market, we strengthened the business case for those carriers to upgrade their networks, since offering a triple-play bundle of video, broadband, and voice was critical to being able to compete successfully. Because of these policies, capital flowed into networks, and consumers benefited from better, faster, and more reliable broadband infrastructure.
We also took steps on the wireless side to promote investment and competition. We embraced a flexible use policy for wireless spectrum. Instead of mandating that a particular spectrum band be used with a specific type of wireless technology, the government left that choice to the private sector, which has a much better sense of consumer demand.
This enabled wireless networks in the U.S. to evolve with technology and to do so much more quickly than if operators had to obtain government sign-off each step of the way. Having license terms and conditions that are relatively consistent across spectrum bands has also made it easier for providers to invest in the mobile broadband marketplace.
The Digital Post: The EU is still grappling with a fragmented and somewhat rigid approach to spectrum, despite the efforts of the European Commission. What can Europe learn from the FCC policy on spectrum?
Brendan Carr: The FCC’s spectrum policies have led to a tremendous amount of innovation and investment in our wireless networks. I would like to highlight a few of those here.
First, the FCC has embraced a flexible use policy for wireless spectrum. Instead of mandating that a particular spectrum band be used with a specific type of wireless technology, the government left that choice to the private sector, which has a much better sense of consumer demand.
This has enabled wireless networks in the U.S. to evolve with technology and to do so much more quickly than if operators had to obtain government sign-off each step of the way. For instance, nearly 50% of all mobile connections in the U.S. are now 4G, whereas that figure is only 10% worldwide.
Second, the FCC makes spectrum bands available on a nationwide basis with relatively uniform license terms and build out obligations. So rather than auctioning licenses that cover only part of the country one year and then auctioning other licenses in another year, all of the licenses for a particular spectrum band are offered in the same auction.
This approach gives broadband operators greater certainty and helps them plan their deployments while minimizing transaction costs. It also makes it easier for operators to obtain handsets and other equipment that will operate on their spectrum bands. All of that ultimately means that consumers get access to the spectrum faster and at lower costs.
Third, the FCC tries to keep its eye on filling the spectrum pipeline. It takes years for new spectrum bands to be brought to market, and so waiting for consumer demand to increase before starting the process of allocating more spectrum for consumer use is not an efficient approach.
The U.S. has engaged in a continuous process of reallocating spectrum for mobile broadband. We auctioned AWS-1 spectrum in 2006, 700 MHz spectrum in 2008, and 65 MHz of mid-band spectrum earlier this year, and we’re set to auction our 600 MHz spectrum in 2016. To date, our spectrum auctions have raised over $91 billion for the U.S. Treasury.
Fourth, the FCC has embraced policies that make it easier for operators to deploy their spectrum. One way we’ve done that is by adopting what the FCC calls “shot clocks.” These require state and local governments to act on an operator’s request to construct a new tower or add an antenna to an existing structure within a set period of time, say within 90 or 180 days.
Another step the FCC has taken is to streamline the process of obtaining the historic preservation and other approvals that are required when an operator deploys broadband infrastructure. Combined, these actions have allowed spectrum to be deployed faster and have meant that consumers get quicker access to new mobile broadband offerings.
The main reason why, back in 2013, Neelie Kroes felt compelled to put forward a set of EU-wide rules on net neutrality is that she could foresee that a wave of domestic regulations was bound to materialize sooner or later: a nightmarish scenario for a European commissioner who dreamed of building a telecom single market where “telcos can think European to compete globally”.
Alas, the kind of national fragmentation Mrs. Kroes ardently sought to prevent is about to be blessed by EU law. Under a hard-fought settlement reached by the European Parliament and the Council in June, key elements of the first-ever EU bill on net neutrality will be adjudicated in the member states and their interpretation left up to national regulators. For example, domestic regulators will determine what is and what is not a “specialised service”.
This epilogue, which contradicts the original goal of the bill as conceived by Neelie Kroes, offers very few benefits for consumers and comes at a high price. It may undermine the legal certainty and regulatory predictability (at EU level) that the telecoms sector badly needs to attract more investment. The truth is that a regime of “European net neutralities” is nearly as bad as not having legislation at all.
The more nation states try to build customized radio spectrum policies on a country-by-country basis, the slower the auctions happen and the later consumers get LTE, says Christopher Yoo.
The DSM strategy is a huge opportunity for Europe, he stresses, but it requires a genuine commitment by member states towards opening their borders: in the Internet economy, refusing change is not an option, and if you protect your domestic economy you will simply be left behind.
Europeans have to make sure that they do not cave in to people who oppose increased competition stemming from creating a pan-European digital market across borders, adds Professor Yoo. This change can be very disruptive but it will ultimately yield tremendous benefits.
Is there a single reason why we need national telecoms law in the digital age in the European Union? I am struggling to think of one. I’m not saying a universal telecoms utopia is about to descend on us. I am simply putting it out there that there’s no use for national law in this area anymore.
I can already hear the “buts” screaming at me through the interweb. Remember that the starting assumption of most individuals and like-minded groups is that they are unique, or have unique needs. This applies across all forms of thinking and human activity. Thinking you or your country is unique is about the least unique thing you can do.
From a consumer perspective, we need to ask why the most borderless service and content category – the online world – is the one with the most national regulation.
Why does a consumer in Austria need more rights to change phone contracts than a citizen in Slovenia? Why does a Belgian need 141 times more protection from 4G radiation than a citizen in France?
In a connected and data-powered world we can see relatively quickly and easily what works and what doesn’t. Policy is no exception, and it’s quite clear that 28 different approaches to telecoms don’t work.
Achieving uniformity would come at too high a political cost; but the cost of pretending there is a policy benefit to every European country doing their own thing is higher.
Jealously guarding the pet ideas and projects of whichever mid-level policy makers have cornered the geeky digital fields for themselves in a given country is not the way to make good policy.
Think about the example of the so-called “Universal Service Obligation” which imposes on incumbent telecoms companies certain levels of service guarantee. In the countries that want it. To the extent they feel like it. Or felt like it when they last discussed it. A decade ago. Seriously?
Here’s my question to people who think it should exist: what’s universal about it, if every country gets to decide whether to have it and what the parameters are?
Aside from the fact that it’s hard for a government to keep up to date with what people want and need in terms of internet access, it’s all just so pointless when a Universal Service Obligation is neither universal nor obligatory.
From a rural broadband roll-out perspective, let’s look beyond the failed effort to use the European Investment Bank to fill rural gaps, as the Connecting Europe Facility proposed.
[Tweet “We can abandon failed national practices and laws without impinging on national sovereignty…”]
…simply by spreading the best policy models.
What is there to gain from following the Italian model, where virtually no-one has fast broadband outside of cities? Nothing.
What is there to gain from the Swedish model, where virtually everyone does? A great deal.
Letting countries choose to fail out of deference to traditions of national policy failure is ridiculous. This has nothing to do with threatening a country’s identity or way of life (unless you count poverty and isolation as a way of life) and everything to do with common sense.
From a business perspective let’s look at mobile roaming charges. In popular debate we hear about holiday-makers getting shock bills or being forced to turn off their devices.
In reality the people and economic activity affected most by roaming charges are businesses and business travellers. Job creation, exports, and start-ups are not helped by roaming charges.
These are arbitrary charges introduced in the 1990s (and not from the get-go: the original system was just a 33% mark-up on your domestic bill when you travelled), based on non-existent extra costs.
It costs supermarket chains more to deliver goods to stores without truck parking spaces than it costs telecoms operators to connect you while abroad. And you don’t see supermarkets charging a 2000% mark-up in response to their traffic problem. Why should telcos?
Stupid systems like this hurt the people who do most to grow our economy, and they hurt blameless victims like people who live in border zones. And that’s before considering the anger and confusion imposed on ordinary retail customers.
There is no justification for them on market or political terms. They are simply inconsistent with the over-riding principle of the single market. No national law can possibly trump that set of arguments. And yet we hear endlessly about how hard abolishing roaming fees will be for telcos in holiday destinations like Portugal and France.
Yes, how terrible for those telcos having to cope with people paying them to send their holiday photos home in August. Next time you see a telco CEO: remember he is like a starving child – only he suffers more – because the fat from his roaming cash cow is at risk.
Then there’s the mother of all stakeholder issues: net neutrality. We cannot seem to agree on a working definition of the concept to conduct a public debate. Given that, is 28 parallel national debates about the same universal network that we all depend on really the way to achieve public good? The only ways forward are European and global debates.
Then there’s the mother of all geek issues: spectrum. If you look at a chart of how spectrum is allocated in Europe, it looks like a university student threw up on a sheet of paper after a night out drinking cheap spirits and pizza. It’s so mangled and messed up that it’s clear no national set of decision-makers could possibly untwist it all.
It’s rare to even get enough brains in a room to make sense of it, let alone fix it. The relevance to the issue of national vs EU law is that there’s a difference between saying every country has the right to reserve certain amounts of spectrum for military use (fair enough) and other distortions.
There is no reason to leave TV stations with existing spectrum just because they existed before mobile companies did.
There is no sense in greedy national treasuries conducting blood-sucking auctions – because all they do is delay new service roll-out and the extra tax receipts that come with it.
There is no reason for spectrum to be allocated to pagers (remember them on the doctor’s waist in the 1980s) and taxi radio systems (what’s an app, guys?).
Again, let countries reserve spectrum for the military, and then let a common European system apply to the rest. If we don’t, you can look forward to your mobile phone dropping out even more often sometime in the mid-future.
Even when it is not legally or politically possible to apply European or global solutions, it is still fundamentally necessary to have European debates. Because if there is one thing we have learnt from the history of telecoms – from undersea cables to the internet to the GSM standard to the rise of Vodafone to roaming price caps – it is that cross-border action has the best impact.
The European telecom sector is faced with significant challenges in terms of rapidly emerging new technologies and new forms of competition and business models driven by these technology changes. 2015 will be a pivotal year for European policy makers, regulators and competition watchdogs to improve the environment for the European telecom sector.
Regulation is the single most important driver in the telecoms sector. HSBC’s Global Regulatory Heatmap report aims to take the regulatory temperature globally and to identify those countries where regulation is most and least supportive of investment, and then to assess how the world’s largest operators are exposed to these conditions.
The report shows that the European region has faced the harshest regulation, although there are now encouraging indications that the environment here is starting to improve.
The telecom sector is widely regarded as an enabler of innovation, productivity growth and international work sharing in the context of an increasingly competitive and globalised economy.
Academic research indicates that economic progress in any given country is driven less by the mere arrival of new technologies, and more by the speed, breadth and depth of their adoption.
Consequently, it is tremendously important that network operators invest heavily so as to ensure that the latest telecoms technologies are available on as ubiquitous a basis as possible.
Network investment is important for another reason also: as set out in HSBC’s Supercollider report, it can be clearly demonstrated that the primary driver of lower prices in telecoms is CAPEX.
In deploying more of the most modern systems, operators take advantage of new technology that is the basis for innovation, capable of handling traffic with greater efficiency, and thus at lower unit cost.
Lowering unit costs and prices should be a primary policy goal, as it is this that enables the development and adoption of novel applications and contributes towards productivity growth in the broader economy. However, it has to be mentioned that there is a Babylonian confusion in the public debate over whether “price” means the monthly bill or the price per unit (MB, text, voice minute).
This obviously raises the question of what might induce operators to raise their CAPEX, and here the empirical evidence is plain. The most effective drivers of higher network spending are higher EBITDA margins and competition, as these give MNOs both the means and the incentive to invest.
The central challenge facing regulatory policy makers is therefore how best to secure a benign investment environment, in which healthy margins support heavy CAPEX. To this end it has to be kept in mind that regulation is the single most important driver for securing EBITDA margins.
[Tweet “However, Europe’s regulatory framework lacks incentives to invest and shows signs of obsolescence”].
Therefore it should be fundamentally overhauled in the course of the next framework review process.
To name only two examples of obsolescence:
(1) The current framework is too narrowly focused on legacy services, still centered around traditional voice telephony, text messages and broadcast TV, which are now just legacy applications within a much bigger space: digital services.
(2) The current framework is inherently slow, leading to inappropriate multi-year/multi-step iterative procedures that fail to keep pace with market and technology evolution.
Along Europe’s future regulatory trajectory, there is a series of challenging issues and required steps for regulatory modernization to be dealt with.
When defining the trajectory of regulatory modernisation, Europe should avoid going for incremental improvement and should rather aim at an ambitious scenario, stepping into a “Virtuous Circle” based on innovation, investment and smart regulation (“Regulation 2.0”).
The following listing of issues is not comprehensive; the order of listing does not indicate priorities:
(1) From traditional telco services to internet-based ecosystems – SMP regulation: In the telecom legacy world, telcos acted as gatekeepers aiming at monetizing single products or integrated value chains. Now, ecosystems controlled by the internet giants (OTT players or ‘edge providers’) are the new competitive engines that capture and deliver value.
These competitors were never foreseen by regulators, and yet have amassed customer bases that dwarf those of even the largest telecoms companies (for example the merged Facebook – WhatsApp conglomerate).
The emergence of such powerful forces does prompt the question of whether conventional regulation of the sector, stratifying the industry between incumbents obliged to provide wholesale capacity and resellers able to obtain this capacity on favourable wholesale terms, still remains appropriate.
(2) New regulatory bottlenecks: driven by the changes described earlier, traditional regulatory bottlenecks, such as access to infrastructure, will become less important or even obsolete, whereas access to the huge data collection and processing capabilities of the OTTs will be (or already is!) a crucial bottleneck to be dealt with by all players in the digital services sector.
(3) Market definitions: Will the current set of ‘recommended markets’ (even the most recent revised list of recommended markets) be a future proof instrument for regulators and competition authorities?
Looking at the Facebook – WhatsApp merger, access to data collection and processing, the ‘machine room’ of the internet giants, seems to be a relevant topic for market definitions.
[Tweet “Market definitions need to get broader, more flexible and include OTT”].
They may also be either extended beyond national borders, or defined at a sub-national level.
In other words, will we continue to differentiate among separate vertical markets and spend considerable time and resources on their definitions and updates, whilst cross-subsidized business models exploit the gaps and shift profits across them?
(4) Definition of services and categories, SMP-based regulation: The names and definitions of the current regulatory categories of “Information Society Services” and of “Electronic Communications Services” used in the European regulation, have become obsolete. The obligations associated with these categories should be reorganized as well as the legal instruments to be used.
Luisa Rossi (Orange) recently pointed out that, “…the old rules are no longer adequate and yet still apply, while new issues are not addressed and require action. This is why it is now important for the legislative framework and regulatory practices to embrace this phase of development…The starting point for the reforms should be the creation of a digital services category with the reclassification of traditional communication services, followed by the reorganisation of the associated obligations such as transparency and non-discrimination, security, privacy, data retention, emergency services, interoperability and portability. Hence, digital services would be subject to a common set of rules enshrined in a new horizontal European legislation, whichever the provider or the technology used. Such an approach should be preferred to sector specific rules.”
(5) Reciprocal regulation: Corporates from neighbouring areas of the economy such as payTV – to give just one example – have the right to purchase telecoms infrastructure, without there being the reciprocal right of telecoms operators to purchase exclusive media content on a similar basis.
(6) Preference for investments in infrastructure: While it may be desirable to address bottlenecks, such as in the access to fixed-line or even mobile infrastructure (via measures such as unbundling and MVNOs respectively), is it really desirable that those reselling the capacity should have an advantage over those building it? The consequence of this tilt to the competitive landscape will be that less infrastructure is deployed.
(7) Modernizing Competition Policy: DG COMPETITION statements referring to the Austrian consolidation discuss the lack of an entrant as if it were a failure; on the contrary, it was a successful experiment to determine whether there was economic viability for a fourth player, and the answer was clear: there was not.
A negative outcome from an experiment is not a failure (Karl Popper on ‘falsifiability’). Let dynamic efficiency gains work! Current competition policy and practice obviously overlooks the fact that a growing part of the entire digital services market (the OTTs) is completely unregulated, while the remainder, a much smaller part of the sector, is strongly micro-regulated.
A recent set of papers by Papai and Csorba casts doubt on the assumption that introducing more mobile competitors into a national market is necessarily better for consumers. Forget the mantra of the crucial importance of the famous ‘fourth player’; there is no special magic in the number “four”!
(8) Regulator’s dilemma and challenges: Legislators and regulators have two principal choices in this debate: full de-regulation or continued (selective) regulation. The key question is whether regulation is really capable of specifying how markets should function.
Most would concede this is something that is easier to achieve in industries subject to a slower pace of technological change and disruption, such as utility businesses.
By contrast, in telecoms the scope for disruptive technologies to transform the industry (for example, mobile, WiFi, voice over IP, OTTs, etc.) makes the system far more chaotic.
Given this inherent unpredictability, one group argues that there is an argument for allowing the market to take its natural course and that the competitive dynamics of an industry subject to Moore’s Law are perfectly sufficient.
On the other hand, those who do wish to see continued regulatory intervention argue that the question is rather how better to identify those areas that would benefit from it.
The so-called ‘three criteria test’ remains the preferred yardstick (the presence of sustained barriers to entry, the absence of effective competition, and the inadequacy of existing competition law to deal with the issue).
In any case,
[Tweet “regulators should do their utmost to stay updated with leading edge developments, technologies “]
and innovations on a global level, to better understand markets on the move and to base their decisions on these insights.
(9) Spectrum Policy: Europe’s method of allocating spectrum is one of the least harmonised and least efficient on a global scale.
Most industry parties agree that there is an urgent need to harmonise this process, in terms of awarding methods, coordinating the timing of awards, the duration of usage rights and the conditions on which spectrum can be traded.
One possibility could be the creation of pan-European licences. However, any such proposals (as per those in the “Connected Continent”, or “Telecom Single Market”, proposal) seem bound to raise concerns amongst the member states.
Stronger instruments for harmonizing timetables, awarding methods, and license durations when assigning new spectrum are urgently needed.
In particular, Europe should be quick and harmonized in allocating and assigning spectrum for mobile broadband in the 700MHz band (2nd digital dividend).
Measures should include: (1) No – or significantly higher – spectrum caps, (2) Perpetual usage rights with ‘use it or lose it’ rule imposed, (3) Fostering secondary spectrum market.
(10) Net Neutrality: The EU legislation on net neutrality should allow operator innovation with specialized services, which will be a key for 5G, subject to transparency and other appropriate safeguards. This is also a question relevant for the competitiveness of the European industry.
It is an open issue for the time being whether the US will allow the equivalent of specialized services in the future; if it does, and EU operators are not able to innovate with specialized services, such innovations will likely happen outside the EU, in places like the US.
The Connected Continent proposals on net neutrality are a case in point: in the eyes of many industry experts they initially amounted to a very judicious compromise, but they were subsequently heavily modified in the European Parliament.
(11) Change process in EU: Perpetual regulatory intervention tends to necessitate ever more, and more complex, legislation, and the legislative process is itself fraught with risk, since positive proposals may be diluted or even reversed when these highly complex topics are debated.
(12) Benefits of scale: Scale already plays an important role within the industry, and in future there will probably be opportunities to extend scale effects still further: for instance, as platforms standardise around IP technology, greater cross-border synergies should become feasible.
This is welcome, since many recent regulatory reforms (such as with regard to termination rates and roaming charges) have arguably reduced the incentive for cross-border consolidation.
(13) License to fail: In a dynamic and competitive market, there will, by necessity, be companies that fail. Indeed, the very fact that there are losers actually indicates the success of competition.
However, European regulation has often shied away from recognising this. For example, it has been particularly difficult to use the ‘failing firm’ defence to justify a merger – including in those cases where financial investors would have concluded that the target company could not sustain the level of network and customer investment required to be able to compete effectively.
Even those industry observers looking for regulatory reform rather than programmatic de-regulation agree that the consolidation of smaller players (thereby creating stronger entities with margins better able to support investment) would be a powerful positive; hence the widespread support for four-to-three in-country mobile consolidation.
In conclusion, there is broad agreement among industry experts that the current regulatory framework in Europe shows clear signs of obsolescence and should be fundamentally overhauled without further delay.
All sides call for a more cohesive approach, and the formation of a coherent industrial policy for the telecoms sector, so that it is better able to compete against its global rivals – in terms of investment ability, innovation adoption, network capability and attractive unit pricing.
This post was originally published on www.serentschy.com
Rigid net neutrality rules risk becoming an ineffective remedy to a badly defined problem. That is why politicians should leave such a complex issue to technical, independent regulators. Contrary to what many argue, a restrictive approach would not foster innovation.
U.S. President Barack Obama’s recent statement in favor of net neutrality is a good example of why politicians should stay away from bold statements when dealing with complex issues. And indeed, net neutrality is so complex, technically, economically and politically, that no one has found the way to square the circle: the aggravating and confusing factor is that the word “neutrality” sounds appealing, whereas “diversity” and “discrimination” inevitably sound negative to politicians.
This is why it is better to leave the hot potato to technical, independent regulators. President Obama certainly had good intentions: but there is reason to doubt that what he is advocating (putting unprecedented and ill-advised pressure on the FCC) would make users better off. Here’s why.
The Internet is not neutral, and never will be. As neutrality advocates often recall, it was designed to guarantee end users against discrimination and usage limitations, and to allow no intrusion into, or inspection of, files by any central "intelligence".
However, this is not what the Internet is today, and not only because of the recent scandals generated by massive surveillance by government authorities in many countries. Since the 2010 FCC Open Internet Order entered into force, the “information superhighway” has become populated by cars with different engines and many toll lanes, which allow different speeds.
Companies such as Apple, Microsoft, Google, Netflix and many others make regular use of traffic acceleration services, either developed in-house or purchased from third parties such as Akamai, Limelight, Huawei and Level 3. This is why some services work better than others on the Internet: in a fully neutral network, this would not be possible.
Mandating net neutrality for telcos and cablecos would not make the Internet neutral: the players that are able to either invest in their “content delivery networks” or purchase expensive services from third parties will still have a toll lane that others can’t afford.
Second, "over the top" products and services such as search engines, wireless and cloud platforms are not (and should not be made) neutral. Giant wireless platforms such as Android, iOS and Windows give priority to certain apps over others, and even block certain (very few) applications. They carry their own default browsers and apps.
Search engines such as Google, Yahoo! and Bing have to show some results first, and must do so in a way that matches their users' preferences; giant cloud providers such as Amazon and Microsoft sell suites that include some favoured products, leaving others out or in the second row.
A neutral Internet would entail that all these companies refrain from customizing services for their end users: indeed, the European Commission seems to be lured by the sirens of “search neutrality” and “platform neutrality” in its antitrust investigation against Google. Would this be good or bad? Most likely, bad.
Third, mandatory net neutrality would not foster innovation as many argue. A “mantra” of neutrality advocates is that net neutrality is the only guarantee that a “new Google” or a “new Facebook” will emerge in the future, just as these successful young companies have done in the past.
But the reality is different: try to name recent examples of successful start-ups, and see how many of them emerged as new "apps" for existing platforms.
This shows how the non-neutral world of Internet platforms is lowering, rather than raising, barriers to entry in the marketplace. The same is happening in the cloud: as companies compete to become the leading cloud provider, they have an incentive to host as many promising start-ups as possible on their platforms. This is why Internet hyper-giants do not initially charge start-ups for services such as sub-domains, enterprise tools, search engine optimization capacity, and access to content delivery networks.
Based on the above, mandatory net neutrality risks becoming an ineffective remedy to a badly defined problem. If it is imposed only on telcos and cablecos, then the Internet will remain non-neutral as it is today, and competition for traffic acceleration services might even be reduced. But if neutrality is extended to search engines, operating systems and wireless platforms, then the Internet will die.
This is why FCC Chairman Wheeler is rightly careful: the solution to the problem can only be cautious and, if anything, deferential to the extraordinary value that the non-neutral Internet is creating for our society every day. This does not mean that specialized services should be left entirely unregulated.
On the contrary, they may well deserve careful monitoring, a good dose of technology to track quality of service, and sharpened competition rules.
Most importantly, the end-to-end Internet must not be cannibalized by one-way networks: otherwise, video will kill the Internet star too. A nuanced solution, based on the healthy co-existence of specialized services and best-effort Internet, is best suited to the ever-changing nature of the Internet: imposing neutrality, by contrast, would be tantamount to throwing out the (cyber-)baby with the bath water.
The temptation to be resisted is praising neutrality as synonymous with freedom, democracy and openness. It is not. Full-fledged, rigid net neutrality rules are the equivalent of what the Trabant was in East Germany: the only car people could have, very neutral, very bad, identical for everybody.
It became famous in the Western world when the Berlin wall fell 25 years ago, and thousands of East Germans drove their Trabants over the border: once in the “free” world, they immediately abandoned their “neutral” cars, and started a new, non-neutral life.