The 10-year extension of the IGF mandate is a testament to the IGF’s significant evolution over the past decade as the leading global forum for dialogue on Internet governance issues. What’s next?
As the Internet continues to evolve at breakneck speed, many critical issues still need to be addressed. A major area is Internet governance. This broad realm encompasses both governance of the Internet (essentially the business of ICANN and other technical organizations) and governance on the Internet (a range of issues affecting services and content, such as privacy and cybersecurity).
Last year, the U.N. General Assembly approved the renewal of the mandate of the Internet Governance Forum (IGF) for another 10 years. Born in Tunis at the end of the second phase of the World Summit on the Information Society (WSIS), the IGF has evolved to serve as the leading global forum for dialogue on Internet governance issues. Since the first forum in 2006, the IGF has been an annual event.
The IGF now gathers a growing number of experts from academia, civil society, governments, industry and the technical community. Traditional topics of Internet governance involve setting rules, standards, policies and providing technical support so that the world can be connected on the global Internet. Going beyond the technical issues, the IGF also deals with complex social, economic and transnational issues related to the use of the Internet.
Getting to where we are today has been both a challenging and rewarding journey that is still in progress.
The IGF has gone through times of skepticism about both its continued existence and its ability to fulfill its mandate. Over time, the IGF has gradually expanded beyond its narrow circle as a “discussion only” forum to include processes that can produce tangible and useful outcomes, seen in the Best Practices Forums (BPF) and the Dynamic Coalitions. The 10-year extension of the IGF mandate is a testament to the IGF’s significant evolution over the past decade.
Earlier this month, over 2,000 participants from 83 countries, joined by hundreds more participating remotely, came together in Guadalajara, Mexico, to attend the 11th IGF meeting, the first since the mandate’s renewal.
As in previous years, ICANN’s Board directors, community leaders and senior staff attended the IGF. But unlike past years, the role of ICANN and the Internet Assigned Numbers Authority (IANA) functions did not take center stage in Guadalajara.
This is thanks to the Internet community, which worked hard over the past few years to finalize the transition of the U.S. Government’s stewardship of the IANA functions to the global multistakeholder community. Instead, this year’s debate focused on lessons learned from the IANA transition as a recent and successful example of a multistakeholder process in action.
With over 200 sessions, the 2016 IGF agenda covered the standard topics of Internet governance such as access, diversity, privacy and cybersecurity; plus more current issues related to online trade, the Internet of Things (IoT) and the U.N.’s Sustainable Development Goals (SDGs).
Links between the SDGs, Internet governance and the IGF figured strongly on the agenda, with a main session and several other workshops organized on this topic. A common sentiment this year was that the IGF should focus more on the SDGs, a stance that was conveyed clearly during the “Taking Stock” session on the last day.
ICANN’s participation at IGF 2016 was led by CEO Göran Marby and Board Chair Stephen Crocker. Primary objectives were to emphasize the successful IANA stewardship transition as an example of how ICANN’s multistakeholder processes work, and to encourage participation in the ongoing work of ICANN’s Supporting Organizations and Advisory Committees.
ICANN’s goals are to continue supporting the multistakeholder model in Internet governance and contributing to global policy discourse with all interested parties – activities that are within ICANN’s mission and scope.
On the day before the event, ICANN organized a town hall session to reflect on the evolution of ICANN’s multistakeholder processes using the IANA stewardship transition as a case study. Presenters sought views from participants on their experiences with ICANN and how they envisage the challenges ahead.
In addition, ICANN community and organization staff planned and conducted workshops and roundtable discussions on a variety of topics such as the IANA transition, the new generic top-level domain (gTLD) Program, the role of noncommercial users in ICANN, law enforcement in the online world, and Asia and the next billion Internet users.
So, what’s next?
Geneva will host the 2017 IGF next December, and discussions about its strategic focus are already underway. Holding the event in Geneva, the second home of the U.N. and host to 192 government missions, may boost the participation of governments from developing countries and of non-U.S. businesses, both of which were concerns raised at Guadalajara.
The IGF Multistakeholder Advisory Group (MAG) will meet early in the new year to determine the focus for the 2017 IGF. At the top of the agenda will be how to respond to the call made by many in Guadalajara for more attention to meeting the targets of the SDGs. The MAG may also want to concentrate on other issues, such as human rights and global trade accords.
After all, the U.N. Human Rights Council meets in Geneva, and the World Trade Organization (WTO) is based there. Once the focus is set, preparatory work can begin for the Geneva IGF.
All in all, 2017 will be an interesting year in furthering key goals in Internet governance.
Picture credits: kvitlauk
The European Commission has recently launched an initiative on the Next Generation Internet, which aims to explore the Internet of the future, its opportunities as well as its challenges. Jesus Villasante from DG Connect explains what to expect from this initiative.
The Digital Post: What are the main goals of the initiative? What is it about?
Jesus Villasante: The Internet has become essential in many aspects of our daily life, for work, education and leisure. The future Internet will be even more pervasive, working with and through many different devices and sensors, and will present completely new functions and characteristics. We have launched the Next Generation Internet Initiative because we believe it is the right time to take a fresh look, with a broad and inclusive perspective, involving from the beginning the various stakeholders: from research, technical and business communities to citizens and civil society.
To help us establish an initiative that has an impact on the evolution of the Internet, a number of preparatory measures have been started:
– an open consultation, running until 9 January 2017, where people can tell us what they expect from the Internet of the future
– an open space for conversations, created to back up the consultation with additional information, background documents and other materials. This is also where we will launch further discussions on the topics that raise the most interest in the consultation, giving people the opportunity to provide more detailed contributions at a later stage
– a call for support actions, just launched in the Horizon 2020 research programme (objective ICT-41), with the aim of identifying specific research topics and creating an ecosystem of relevant stakeholders.
TDP: What are the main concerns regarding the future of the Internet?
JV: The Internet is becoming more and more important for people and for every economic or societal activity. It creates new business opportunities and new ways of social interaction, from the local to the global scale. Many Internet developments have surpassed all expectations in terms of benefits for citizens and the economy. And yet, there are some reasons for concern about further progress: for example, citizens’ lack of control over their own personal data, or restrictions on Internet access for geographical, economic or cultural reasons. These are areas where we need to work to improve the current situation.
TDP: What are the further opportunities and benefits it could bring?
JV: The future Internet should overcome the shortcomings of today’s Internet. It should provide better services, allow for greater involvement and stimulate participation of people in areas such as public life and decision-making. Only if the future Internet is designed for humans can it meet its full potential for society and the economy.
Just one example: today, many Europeans are still reluctant to carry out their financial transactions online. Fraud, data skimming and other security pitfalls make them hesitate. The Next Generation Internet Initiative should take a fresh look at these types of issues and offer new and reliable technological solutions. It should be designed for people, so that it can meet its full potential for society and the economy and reflect the social and ethical values that we enjoy in our societies.
TDP: What is the right approach the EU should take to shape the development of the Net and avoid being left behind?
JV: There are three crucial aspects:
First of all, the scope of the Next Generation Internet Initiative should be multi-disciplinary. This means we should address various technological questions and topics, ranging from interoperability to broadband. We also need to make more use of the technological opportunities arising from advances in research fields such as network architectures, software-defined infrastructures and augmented reality.
Secondly, I think that whatever approach the EU takes, it needs to reflect European social and ethical values: free, open and more interoperable, yet respecting privacy. Only when we are able to reflect these values on the Net can the future Internet release its full potential and provide better services, more intelligence, greater involvement and participation.
Last but not least, we should get more people on board for this initiative. There are 615 million Internet users in Europe and many more worldwide who need to have a say in this. The shape of the Next Generation Internet Initiative is not being decided behind closed doors; on the contrary, we want to reach out to the brilliant minds with excellent ideas. It is they, and that community, who can help us move forward with this ambitious initiative. Of course, the evolution of the Internet will be a global endeavour, but Europe should make a decisive contribution to a better Internet.
Picture credits: Salvatore Vastano
With the rollover of the Root Zone Key Signing Key (KSK), ICANN is marking another important step in improving the security of the Domain Name System, i.e. the Internet’s address book. Here are the details.
ICANN today posted plans to update or “roll” the Root Zone Key Signing Key (KSK), marking another significant step in our ongoing efforts aimed at improving the security of the Domain Name System (DNS).
The KSK rollover plans were developed by the Root Zone Management Partners: ICANN in its role as the IANA Functions Operator, Verisign acting as the Root Zone Maintainer, and the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) as the Root Zone Administrator. The plans incorporate the March 2016 recommendations of the Root Zone KSK Rollover Design Team, after it sought and considered public comment on a proposed rollover process.
What is the KSK?
The KSK is a cryptographic public-private key pair that plays an important role in the Domain Name System Security Extensions (DNSSEC) protocol.
The public portion of the key pair serves as the trusted starting point for DNSSEC validation, similar to how the root zone serves as the starting point for DNS resolution.
The private portion of the KSK is used during the Root KSK Ceremonies to sign the Zone Signing Keys used by Verisign to DNSSEC-sign the root zone.
Why Roll the KSK?
Good security hygiene recommends that passwords be changed periodically to reduce the risk of their being compromised via brute-force attacks.
As with passwords, the community has informed us that the cryptographic keys used to sign the root zone should be periodically changed to help maintain the integrity of the infrastructure that depends on those keys and ensure that security best practices are followed.
The KSK Rollover Process
Rolling the KSK involves creating a new cryptographic key pair that will be used in the DNSSEC validation process to verify that responses to queries for names in the root zone (typically TLDs) have not been altered in transit.
Transitioning to that new key pair and retiring the current key pair is also part of the rollover process. Internet service providers, enterprise network operators, and others who have enabled DNSSEC validation must update their systems with the new public part of the KSK, known as the root’s “trust anchor.”
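Operators commonly refer to DNSSEC keys, including the KSK, by their “key tag”, a 16-bit checksum computed over the DNSKEY record data, which is how a validator identifies which trust anchor a signature refers to. As a minimal illustration, here is a sketch of the key tag computation defined in RFC 4034, Appendix B (the function name is my own):

```python
def dnskey_key_tag(flags: int, protocol: int, algorithm: int, pubkey: bytes) -> int:
    """Compute the key tag of a DNSKEY record per RFC 4034, Appendix B."""
    # The checksum is taken over the wire-format RDATA:
    # 2-byte flags, 1-byte protocol, 1-byte algorithm, then the public key.
    rdata = flags.to_bytes(2, "big") + bytes([protocol, algorithm]) + pubkey
    acc = 0
    for i, byte in enumerate(rdata):
        # Even-indexed bytes contribute the high octet, odd-indexed the low.
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF  # fold the carry back in
    return acc & 0xFFFF

# A KSK carries flags 257 (Zone Key + Secure Entry Point); a ZSK carries 256.
tag = dnskey_key_tag(257, 3, 8, b"")  # empty key material, for illustration only
```

Real root-zone key material is, of course, much longer; the point of the sketch is only the checksum arithmetic that produces the short identifier operators quote when discussing a key roll.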
Failure to do so will mean DNSSEC-enabled validators won’t be able to verify that DNS responses have not been tampered with, meaning those DNSSEC-validating resolvers will return an error response to all queries.
Because this is the first time the root’s KSK key pair will have been changed since it was generated in 2010, a coordinated effort across the Internet community is required to ensure that all relevant parties have the new public portion of the KSK and are aware of the key roll event.
ICANN will be discussing the KSK rollover at various technical fora and using the hashtag #KeyRoll to aggregate content, provide updates, and address inquiries on social media. We have also created a special online resource page to keep people up to date with key roll activities.
If the KSK rollover is smoothly completed, there will be no visible change for the end user. But as with pretty much any change on the Internet, there is a small chance that some software or systems will not be able to gracefully handle the changes.
If complications become widespread, the Root Zone Management Partners may decide that the key roll needs to be reversed so the system can be brought back to a stable state. We have developed detailed plans that will enable us to back out of the key roll in such a circumstance.
The KSK rollover will take place in eight phases, which are expected to take about two years. The first phase is scheduled to begin in Q4 of 2016.
Developers of software supporting DNSSEC validation should ensure their product supports RFC 5011. If their products do, then the KSK will be updated automatically at the appropriate time.
For software that does not conform to RFC 5011, or for software which is not configured to use it, the new trust anchor file can be manually updated.
This file will be available here and should be retrieved before the resolver starts up and after the KSK is changed in the DNSKEY Resource Record Set (RRset) of the DNS root zone.
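The RFC 5011 mechanism mentioned above hinges on hold-down timers: a validator that observes a new key, signed by the current trust anchor, only promotes it to trusted after it has been continuously present for 30 days (the add-hold-down time). A minimal sketch of that timer logic, with a hypothetical class name and key tag value:

```python
from datetime import datetime, timedelta

# RFC 5011, section 2.4.1: a newly observed key must remain continuously
# visible for the add-hold-down time (30 days) before it may be trusted.
ADD_HOLD_DOWN = timedelta(days=30)

class PendingAnchor:
    """Sketch of one pending trust anchor (class and values are illustrative)."""

    def __init__(self, key_tag: int, first_seen: datetime):
        self.key_tag = key_tag        # 16-bit identifier of the DNSKEY
        self.first_seen = first_seen  # when the validator first observed the key

    def is_valid(self, now: datetime) -> bool:
        # Promote the key only once the full hold-down period has elapsed.
        return now - self.first_seen >= ADD_HOLD_DOWN

new_ksk = PendingAnchor(key_tag=12345, first_seen=datetime(2017, 7, 11))
assert not new_ksk.is_valid(datetime(2017, 8, 1))   # 21 days: still pending
assert new_ksk.is_valid(datetime(2017, 8, 11))      # 31 days: may be trusted
```

A full RFC 5011 implementation also handles revocation and removal hold-downs; this sketch shows only why validators need the new KSK published well before it is used to sign.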
ICANN has developed operational tests that software developers and operators of validating resolvers can access to evaluate whether their systems are prepared for the KSK rollover. You can learn more about these tests here.
As the KSK rollover draws nearer, all interested parties can learn more and get updates at https://www.icann.org/kskroll. Please share this resource with others and encourage them to learn about these upcoming changes to the DNS.
This post was originally published on the ICANN website.
Picture credits: Mike
The ICANN54 meeting in Dublin mid-October represented a key moment in the development of a proposal for the IANA stewardship transition. We are now entering a crucial time where all the pieces must come together in harmony, in order to cross the finish line.
Earlier this year, I wrote that 2015 would be a busy year for the Internet, and it most certainly has been just that.
For the past 19 months, the ICANN / Internet community, led by the hundreds of participants in the IANA Stewardship Transition Coordination Group (ICG) and the Cross Community Working Group on Enhancing ICANN Accountability (CCWG-Accountability) in particular, has spent a significant amount of time developing possible mechanisms to replace the US Government’s role, and ensuring that ICANN has the right accountability and governance systems in place to allow the international multistakeholder community to effectively exercise its supervisory role in future.
This historic journey started last year, in March 2014, when the United States government announced its intention to transition its historical supervision of the IANA functions to the global multistakeholder community.
These functions, administered by ICANN under contract with the US Government’s National Telecommunications and Information Administration (NTIA), deal with the global coordination and maintenance of the Internet’s unique identifiers, such as domain names and IP addresses.
How did we get to where we are now? The latest proposals included setting up IANA as a legal entity and affiliate of ICANN, which would be subject to reviews by new dedicated operational committees, based on enhanced performance reporting.
A system of escalation would ensure that the IANA functions are performed properly and that any emerging problems would be dealt with swiftly.
These new mechanisms would come with enhanced community powers, notably in relation to the ICANN Board and appeals processes.
Once precise proposals emerged, opinions started to polarize on possible alternatives, which is fairly common at this stage of discussions when dealing with such evolutionary organizational changes on an international level. Those of us ensconced in Brussels-level negotiations are familiar with these kinds of interactions.
The ICANN54 meeting in Dublin mid-October represented a key moment in the development of a proposal for the transition.
Dublin seems to have provided the right level of positivity; the right setting for the community to work through a number of important questions and move toward a revised set of proposals, particularly as regards accountability and governance issues.
After weeks of intense discussions, we witnessed another success story for the multistakeholder model. Stakeholders from across the spectrum of interests – business, civil society and government representatives – focused on finding a path forward that everyone could agree upon.
The IANA Stewardship Transition Coordination Group (ICG) finalized its work at the end of ICANN54 in Dublin, and is currently discussing potential implementation-related work.
As for the CCWG-Accountability, current plans include:
* As of 15 November, a 36-page formal update on progress has been released; we highly encourage you to review this document here. The full Third Draft Proposal will be shared with the public on 30 November 2015, with a 21-day public comment period ending on 21 December 2015. This will be announced on ICANN.org, so please keep an eye out for it and get involved.
* Pending no major changes or concerns raised during the public comment period, the group aims to submit a proposal to the ICANN Board by mid-January 2016. It would then be sent to the U.S. Government for review, and implementation would then likely begin later in the year.
This is where we stand as of today; a crucial time where all the pieces must come together in harmony, in order to cross the finish line.
With all stakeholders getting involved and providing input, we look forward to seeing the community produce a consensus-led proposal in the time frame outlined.
photo credit: Alpha du centaure
The FCC’s decision to apply utility-style regulation to the Internet is resulting in less investment and reduced deployment, and it will inevitably lead to less robust competition in the broadband market, argues Brendan Carr, legal advisor to FCC Commissioner Ajit Pai.
The Digital Post: You suggested that the FCC decision to reclassify broadband as a utility could undermine the US telecom success story. What are the main negative consequences?
Brendan Carr: The FCC’s decision to apply heavy-handed, utility-style regulation to the Internet is putting the U.S.’s success story at risk. It is already leading broadband providers to cut back on their investments and put off network upgrades that would have brought faster speeds and more reliable broadband to consumers.
And the decision to put the U.S.’s success at risk was an entirely unnecessary one. In the 1990s, American policymakers decided on a bipartisan basis that the Internet should develop unfettered by government regulation.
Regulators applied a light-touch regulatory framework that led to unparalleled levels of investment and, in turn, innovation. The private sector spent $1.3 trillion over the past 15 years to deploy broadband infrastructure in the U.S. That level of investment compares very favorably in the international context.
A study of 2011 and 2012 data shows that wireless providers in the U.S. invested twice as much per person as their counterparts in Europe ($110 per person compared to $55). And the story is the same on the wireline side, with U.S. providers investing more than twice those in Europe ($562 per household versus $244).
Consumers benefited immensely from all of that investment. On the wireless side, 97% of Americans have access to three or more facilities-based providers. More than 98% of Americans now have access to 4G LTE. Network speeds are 30% faster in the U.S. than in Europe.
The story is similar on the wireline side: 82% of Americans and 48% of rural Americans have access to 25 Mbps broadband speeds, but those figures are only 54% and 12% in Europe, according to a 2014 study that looked at 2011 and 2012 data. And in the U.S., broadband providers deploy fiber to the premises about twice as often as they do in Europe (23% versus 12%).
Facilities-based intermodal competition is also thriving with telephone, cable, mobile, satellite, fixed wireless, and other Internet service providers competing vigorously against each other.
But unfortunately, the U.S. is now putting all of this success at risk. At the beginning of 2015, the FCC decided to apply public-utility-style regulation to the Internet over the objections of two FCC Commissioners.
I fear that we are already seeing the results of that decision. Capital expenditures by the largest wireline broadband providers plunged 12% in the first half of 2015, compared to the first half of 2014. The decline among all major broadband providers was 8%. This decrease represents billions of dollars in lost investment and tens of thousands of lost jobs.
And the decline in broadband investment is not limited to the U.S.’s largest providers. Many of the nation’s smallest broadband providers have already cut back on their investments and deployment. Take KWISP Internet, a provider serving 475 customers in rural Illinois.
KWISP told the Commission that, because of the agency’s decision to impose utility-style regulation, it was delaying network improvements that would have upgraded customers from 3 Mbps to 20 Mbps service and capacity upgrades that would have reduced congestion.
These and many more examples all point to the same conclusion. The FCC’s decision to adopt heavy-handed Internet regulation is resulting in less investment and reduced deployment. It will inevitably lead to less robust competition in the broadband market and a worse experience for U.S. broadband users.
But I am optimistic that the U.S. will ultimately return to the successful, light-touch approach to the Internet that spurred massive investments in our broadband infrastructure. Efforts are underway in both the courts and Congress to reverse the FCC’s decision. And following next year’s presidential election, the composition of the FCC could be substantially different than it is today.
The Digital Post: What is your opinion about the Net Neutrality legislation due to be adopted by the EU? What are the main differences with the Open Internet order?
Brendan Carr: I think the FCC’s decision to adopt utility-style regulation should serve as a cautionary tale for regulators that are examining this issue. FCC Commissioner Ajit Pai, who I work for, has described the FCC’s decision as a solution that won’t work to a problem that doesn’t exist.
When the FCC acted, its rulemaking record was replete with evidence that utility-style regulation would slow investment and innovation in the broadband networks. And the evidence on the other side of the ledger? Non-existent.
Net Neutrality activists have trotted out a parade of horribles and hypothesized harms, but there was no evidence whatsoever of systemic market failure. The FCC adopted utility-style regulations even though it presented no evidence that the Internet is broken or in need of increased government regulation.
In the absence of any market failure, consumers are far better served by policies that promote competition. Utility-style regulation heads in the opposite direction: it imposes substantial new costs on broadband providers and makes it harder for competitors, particularly smaller broadband providers, to compete in the marketplace. After all, rules designed to regulate a monopoly will inevitably push the market toward a monopoly.
The Digital Post: Next year the European Commission will propose a major revision of the EU current framework on telecoms. From your perspective what should be the priorities?
Brendan Carr: When I met with government officials and industry stakeholders in Brussels, one point kept coming up: the need to increase investment in Europe’s broadband markets. And I agree that embracing policies that will spur greater broadband investment is a key priority. According to a Boston Consulting Group report that just came out, Europe will need an additional €106 billion to meet its Digital Agenda goals.
Historically, the U.S. embraced a number of policies that led to massive investments in broadband networks. For one, U.S. regulators embraced facilities-based competition. We rejected the notion that the broadband market was a natural monopoly.
Therefore, we pursued policies that encouraged broadband providers to build their own networks, rather than using their competitors’ infrastructure. For example, we eliminated mandatory unbundling obligations, which were skewing investment decisions and deterring network construction.
We also made it easier for facilities-based providers from previously distinct sectors to enter the broadband market and compete against each other.
For instance, by making it easier for telephone companies to enter the video market and cable companies to enter the voice market, we strengthened the business case for those carriers to upgrade their networks, since offering a triple-play bundle of video, broadband, and voice was critical to being able to compete successfully. Because of these policies, capital flowed into networks, and consumers benefited from better, faster, and more reliable broadband infrastructure.
We also took steps on the wireless side to promote investment and competition. We embraced a flexible use policy for wireless spectrum. Instead of mandating that a particular spectrum band be used with a specific type of wireless technology, the government left that choice to the private sector, which has a much better sense of consumer demand.
This enabled wireless networks in the U.S. to evolve with technology and to do so much more quickly than if operators had to obtain government sign-off each step of the way. Having license terms and conditions that are relatively consistent across spectrum bands has also made it easier for providers to invest in the mobile broadband marketplace.
The Digital Post: The EU is still grappling with a fragmented and somewhat rigid approach to spectrum, despite the efforts of the European Commission. What can Europe learn from the FCC policy on spectrum?
Brendan Carr: The FCC’s spectrum policies have led to a tremendous amount of innovation and investment in our wireless networks. I would like to highlight a few of those here.
First, the FCC has embraced a flexible use policy for wireless spectrum. Instead of mandating that a particular spectrum band be used with a specific type of wireless technology, the government left that choice to the private sector, which has a much better sense of consumer demand.
This has enabled wireless networks in the U.S. to evolve with technology and to do so much more quickly than if operators had to obtain government sign-off each step of the way. For instance, nearly 50% of all mobile connections in the U.S. are now 4G, whereas that figure is only 10% worldwide.
Second, the FCC makes spectrum bands available on a nationwide basis with relatively uniform license terms and build out obligations. So rather than auctioning licenses that cover only part of the country one year and then auctioning other licenses in another year, all of the licenses for a particular spectrum band are offered in the same auction.
This approach gives broadband operators greater certainty and helps them plan their deployments while minimizing transaction costs. It also makes it easier for operators to obtain handsets and other equipment that will operate on their spectrum bands. All of that ultimately means that consumers get access to the spectrum faster and at lower costs.
Third, the FCC tries to keep its eye on filling the spectrum pipeline. It takes years for new spectrum bands to be brought to market, and so waiting for consumer demand to increase before starting the process of allocating more spectrum for consumer use is not an efficient approach.
The U.S. has engaged in a continuous process of reallocating spectrum for mobile broadband. We auctioned AWS-1 spectrum in 2006, 700 MHz spectrum in 2008, 65 MHz of mid-band spectrum earlier this year, and we’re set to auction our 600 MHz spectrum in 2016. To date, our spectrum auctions have raised over $91 billion for the U.S. Treasury.
Fourth, the FCC has embraced policies that make it easier for operators to deploy their spectrum. One way we’ve done that is by adopting what the FCC calls “shot clocks.” These require state and local governments to act on an operator’s request to construct a new tower or add an antenna to an existing structure within a set period of time, say within 90 or 180 days.
Another step the FCC has taken is to streamline the process of obtaining the historic preservation and other approvals that are required when an operator deploys broadband infrastructure. Combined, these actions have allowed spectrum to be deployed faster and have meant that consumers get quicker access to new mobile broadband offerings.
photo credit: Eris Stassi
Stefano Trumpy, the former delegate for Italy in the Governmental Advisory Committee of ICANN, explains why the transition process for the IANA functions has been delayed and what to expect in the coming months.
The Digital Post: The US government has announced that it will renew the IANA contract for one year, pushing back the IANA transition deadline date to 1 October 2016. What was your first impression?
Stefano Trumpy: I was the Governmental Advisory Committee Representative for Italy from 1999 through 2014, and I now operate within EURALO.
The creation of ICANN followed on from the White Paper issued in 1998 by the Clinton-Gore administration, which called for internationalizing the management of the DNS and for an increased offering of generic top-level domain names.
ICANN started its operations in 1999 under a Memorandum of Understanding with the US Department of Commerce’s NTIA, conceived to last two years; in fact, the MoU was renewed every two years until 2009, when it was replaced by the Affirmation of Commitments (AoC) with NTIA, signed in September 2009.
The AoC did not end the zero-dollar IANA contract between ICANN and NTIA. The 2014 announcement revealed the US government’s final intention to cease its oversight of DNS management.
It is worth noting that ICANN was set up as an experimental multistakeholder initiative to internationalize DNS management under a Democratic US presidency; the evolution from direct US government monitoring of ICANN toward the AoC agreement happened in 2009, also under a Democratic presidency, as did the NTIA announcement of 2014.
With the IANA contract due to end in 2016, the US will be in the midst of its next presidential election campaign.
It is evident that among Republicans there are some misgivings about relinquishing an activity that could be considered an important asset for US industry.
So far, nothing in the Senate hearings signals that the Republicans want to suspend the IANA transition, but they have started to attach conditions that could make the transition more problematic.
The Digital Post: What are the main reasons behind this delay within the IANA transition?
Stefano Trumpy: The main reasons are connected to the hard work required of the multistakeholder groups involved in preparing the final transition proposal, which must be submitted to NTIA – DoC a few months ahead of the next deadline of the IANA contract.
An enormous amount of work has gone into meeting the requirements stated in NTIA's announcement of March 14, 2014, in time for the contract deadline at the end of September 2015.
The groups involved in preparing the transition project are:
– ICG (IANA Stewardship Transition Coordination Group), which operates independently of ICANN;
– CCWG (Cross Community Working Group on Enhancing ICANN Accountability), which operates inside the ICANN structure;
– CWG (IANA Stewardship Transition Proposal on Naming Related Functions), which operates inside the ICANN structure.
The working hours these groups have dedicated so far to preparing the material for the IANA transition are estimated at around forty thousand; reaching agreement among the constituencies involved on some of the options left open for final approval will require more work, but the major part is already done.
The Digital Post: Considering that once the community proposal is finalized, it will take the US Government, i.e. NTIA, a few months to evaluate and adopt it, not to mention the implementation process, don't you fear that the delay could well exceed one year?
Stefano Trumpy: NTIA – DoC needs some time to evaluate whether the IANA transition proposal respects the conditions imposed on March 14, 2014, summarized as follows:
a) ensure that DNS management becomes more secure and accountable;
b) no further direct government involvement in DNS management, and in the IANA functions in particular.
The proposal should therefore be delivered no later than early spring 2016. In my opinion, having followed the US Senate hearings on the IANA transition remotely, approval of the proposal could take more than four or five months. In a recent statement, NTIA Administrator Larry Strickling said the US contract with ICANN could be further extended, up to a limit of three years.
The Digital Post: Some observers warn that if the US abandons its oversight of core internet functions this may open the door to a more inter-governmental approach with countries like China and Russia seeking to have a disproportionate influence in the operation of the Internet that would have otherwise been kept at bay by US government watchdogs. What is your opinion?
Stefano Trumpy: NTIA – DoC has been very clear on the question of governmental involvement in DNS management, and in the IANA service in particular; it is therefore clear that if, for example, China and/or Russia were to seek a voice in the management of IANA, the IANA transition proposal would be rejected by the US.
Another personal consideration is that, after a successful IANA transition, nothing will change in a way that could enable disproportionate influence over the operation of the Internet by other countries.
If, in the end, the transition does not take place, my guess is that the international debate over the privileged role of the US in directing the IANA service will carry on into the post-WSIS+10 years, with an enormous amount of energy spent diplomatically pressing the US to abandon its supervisory role over the Internet's addressing system.
Therefore, I really hope the conditions will be met to present a satisfactory IANA transition proposal to NTIA by March or April next year. What does satisfactory mean?
I recommend that the ICG, where different options exist for continuing the role currently played by NTIA, choose the simpler solution: one that guarantees a smooth continuation of the service and does not disturb ICANN's operational role.
Stefano Trumpy: Born in 1945. Engineering degree. Director of the CNUCE Research Institute of the National Research Council from ‘83 through ‘96. Pioneer of the introduction of the Internet in Italy. Administrator of the ccTLD ".it" from its inception in 1987 until 1999. Delegate for Italy in the Governmental Advisory Committee of ICANN (1999-2014). He brought the CNUCE Institute in among the founders of the Internet Society (ISOC) in 1992 and chairs the Italian chapter of the Internet Society. He is a member of the promoting committee of IGF Italy and has participated since the beginning in the Internet Governance Forums promoted by the United Nations.
Will large emerging countries manage to reshape internet governance around their national interests? One thing is sure: tomorrow’s internet will not resemble today’s.
In recent years, global issues connected to the internet and its uses have vaulted into the realm of high politics. Among these issues, internet governance is now one of the most lively and important topics in international relations.
Long ignored and confined to small silos of experts, the issue burst into view when the leaks disclosed by Edward Snowden on large-scale electronic surveillance by US intelligence agencies triggered a massive backlash against the United States' historical “stewardship” of the internet.
Not surprisingly, the stakes are high: today 2.5 billion people are connected to the internet, and by 2030 the digital economy is likely to represent 20% of the world's GDP. In emerging countries, the digital economy is growing by 15% to 25% every year.
Studies project 50 or even 80 billion connected “things” by 2020. Beyond mere figures, internet governance sharpens everyone's appetite – from big corporations to governments – for the internet has taken up such a place in our lives and touches on so many issues, from freedom of expression to privacy, intellectual property rights, and national security.
It is worth underlining that the issue is particularly complex. For some, the governance of the internet should respect free-market rules – a deregulated vision carried by the Clinton-Gore administration in the 1990s – or remain self-regulated by techno-scientific communities, as conceived by libertarian internet pioneers.
For others, the advent of the internet in the realm of law-making implies a return to old rules and instruments – but this would mean setting aside the mutations produced by its practices, most importantly the expansion of expression and participation. For others still, the ultimate legitimization would consist in adopting a Constitution or a Treaty of the internet, which would elevate its governance to the global level.
De-Westernizing the internet?
A number of countries have criticized American “hegemony” over the internet (infrastructure, “critical resources” such as protocols, the domain names system, normative influence, etc.). To a large extent, the internet is the ambivalent product of American culture and the expression of its universalist and expansionist ideology.
As U.S. policymakers emphasized the importance of winning the battle of ideas both during the Cold War and in the post-2001 period, the ability to transmit America’s soft power via communications networks has been perceived as vital.
Consequently, in recent years, particularly since the Arab uprisings, governments around the world have become more alert to the disruptive potential of access to digital communications. Demographic factors are also behind calls for change: over the next decade, the internet’s centre of gravity will have moved eastwards.
Already in 2012, 66% of the world's internet users lived in the non-Western world. However, the reasons for questioning the U.S.'s supremacy also lie in these countries' distrust of the current internet governance system, which is accused of favoring the interests of the U.S. alone.
While critical of the status quo, large emerging countries do not constitute a homogeneous bloc. Back in December 2012 in Dubai, when the treaty to revise the International Telecommunication Regulations (ITRs) was fiercely negotiated, some countries such as India, the Philippines and Kenya rallied behind the U.S.
The Dubai negotiations nevertheless showed that these “swing states” – countries that have not decided which vision for the future of the internet they will support – are increasingly asserting their vision in order to get things moving.
Held under the auspices of the United Nations' International Telecommunication Union (ITU), the Dubai meeting therefore served as a powerful platform both to contest American preeminence and to call for multilateral internet governance.
More fundamentally, these tensions reflect another conception of the internet, one resting on a double foundation: on the national level, the claim that states have sovereign power over the management of the internet; and on the international level, the preeminence of states over other stakeholders, and the notion of intergovernmental cooperation to debate internet governance.
To this end, the arguments developed fit into a geostrategic context reshaped by the emergence of new poles of influence. They aim to make the internet an instrument of a country's domestic and foreign policies alike. The preservation of state order, the fight against cybercrime, and the defense of commercial interests are several elements that can be used to justify and advance the questioning of the current system.
China, given its demographic, economic and technological weight, is emblematic of the current “game”. Overall, China has sought to adopt a pragmatic approach: while Beijing does not agree with the concept of the Internet Governance Forum (IGF) – arguing that the so-called “multi-stakeholder” principle does not guarantee equal representation among the different stakeholders and regions of the world – it nevertheless joined ICANN's Governmental Advisory Committee in 2009, and is now very active in promoting its own standards within the organizations where technical norms are negotiated.
Russia, for its part, has put forward several initiatives at the U.N. over the last fifteen years – all of which have built upon a firm opposition to the U.S. and have defended a neo-Hobbesian vision in which security considerations and the legitimacy of states to ensure their digital/information sovereignty play a critical role. Moscow has thus been active within U.N. intergovernmental agencies such as ITU, and regional ones such as the Shanghai Cooperation Organization (SCO) and the BRICS forum.
And then came Snowden
The stances taken by emerging countries unsurprisingly found favorable echoes after Edward Snowden's revelations in June 2013. While Russia opportunistically stood out by granting asylum to Snowden, Brazil promptly expressed its dissatisfaction.
President Dilma Rousseff, herself a victim of NSA wiretapping, took the lead of a virtuous crusade against the status quo: with the loss of the U.S.'s moral leadership, its stewardship over the agencies that manage the Internet became less tolerated. At the U.N. General Assembly, Rousseff criticized Washington in pointed terms, signaling a will to rally others toward emancipation from dependency on the U.S.
Brasilia then intensified its diplomatic offensive by announcing an international summit on Internet governance – called NETmundial – to take place in April 2014 in Sao Paulo. In the meantime, Brazilian authorities promulgated the Marco Civil bill, a sort of Internet Constitution which guarantees freedom of expression, protection of privacy and net neutrality. Is the Brazilian stance in a post-Snowden context purely opportunistic?
Interestingly, Brazil appears to be taking the middle ground between the two governance “models” under discussion so far – the multi-stakeholder and the multilateral – in a context where the Europeans have stepped aside.
Since the World Summit on the Information Society (WSIS) concluded in 2005, Brasilia has been promoting free software and advancing a global internet governance model based on its own domestic model. Rousseff's words fit into a long-term perspective, which sees in the opening of a new international scene – the Web – an opportunity to take the international lead, after former President Lula's relative failures to position Brazil on international security issues.
The world is not flat
Will large emerging countries manage to reshape internet governance around their national interests? At the ITU's last WCIT meeting in Dubai in December 2012, the excessively polarized debates between self-proclaimed partisans of an “open and free” internet and the supporters of a governance resting on territorial sovereignty sparked a strained discourse about a “digital Cold War” preceding an “internet Yalta”.
Since Snowden's revelations emerged, the American reaction has focused in particular on storytelling: if states around the world question U.S. oversight of the internet, it must be because they want to fragment and “balkanize” the global internet – a discourse widely relayed by U.S. Net giants.
Yet the commercial strategies of the major internet companies themselves tend to intensify the fragmentation of online public spaces by creating distortions in internet users' access to information and content sharing – that is, by reducing the openness and pluralism that have given the internet its great social value.
Here lies a powerful engine of contestation, as has recently been the case in Western Europe. Borders reappear where they were not necessarily expected: Google, Apple and Amazon are each building their own ecosystem, from which it is becoming hard to escape.
One thing is sure: tomorrow’s internet will not resemble today’s. Already the power of search engines diminishes the importance of the domain names system; cloud computing, the Internet of things and the spread of mobile internet are starting to radically transform practices and produce new complexities with regards to the internet’s outline and governance.
It is also certain that the situation will remain at a dead end if the two broad and opposed conceptions of the internet persist: a new space of freedom or a new instrument of control.
Photo credit: Paul Downey
The transition process has provided a unique opportunity for the global community to gather with a shared purpose, to achieve the common vision of evolving the core Internet functions efficiently.
It’s over a year since work started for several multistakeholder groups on the transition of the US Government stewardship over the IANA functions.
One of the four key conditions set by the US Government was that the transition proposal should ‘Support and enhance the multistakeholder model’.
If the transition process itself is anything to go by, this condition will easily be met. In fact, this process has been a remarkable embodiment of the model, and is helping to make it stronger. The transition has provided a unique opportunity for the global community to gather with a shared purpose, to achieve the common vision of evolving the core Internet functions efficiently.
Bringing these different views, perspectives and personalities together could have been a major challenge. Instead, the remarkable progress achieved so far shows that this challenge has turned into a hugely positive exercise. The different stakeholder groups, and different regions of the world, have come together as a team to produce high-quality, thoroughly researched work. The mix of lawyers, economists, engineers, civil society and user voices, and academic experts in governance has ensured much-needed, robust exchanges and solidly developed material. The sensible and well-thought-out proposals that have emerged are a testament to that impressive, pioneering collaboration.
And if several extra weeks have been taken here and there to ensure that the proposals would be as robust and consensual as possible, it has been a quick process by any standard: it would be hard to find an example in history of such a global exercise and major, critical evolution happening in such a short space of time, and with such quality and cohesiveness.
From a European perspective we can be proud, too: European stakeholders have been very active in the transition discussions, with, in particular, several working-group co-chair positions held by Europeans. Several of these co-chairs joined us at a recent event on Internet governance held by the European Internet Forum (EIF) in Brussels, where, in his keynote, Fadi Chehade underscored how our region's participation, with its vast experience in building collaborative institutions, has brought strong input into both the structural aspects of the transition and the accountability and governance work.
The multistakeholder model comes out of this process not just as the proven way of coordinating the management of critical Internet resources; more importantly, it is reinforced as a crucial method for handling the complex, transnational endeavours of our global age.
Our community should be proud to have pioneered and evolved this system which drives successful global cooperation – a worthy direction for the future.
Originally posted on: ICANN blog
photo credit: Eric Fischer
2015 is going to be a busy year for the Internet – and not just in Brussels with the recent arrival of an ambitious new team of Commissioners but globally – with the evolution of Internet governance and the IANA functions Stewardship Transition.
It will be 10 years since the conclusion of the UN World Summit on the Information Society (WSIS), a four-year process held in two phases that produced a number of declarations and commitments, created the Internet Governance Forum (IGF), and endorsed the principle that governance of the Internet should be ‘multi-stakeholder’.
Initially the WSIS was to focus – rightly, many would say – on ‘bridging the digital divide’. But the more political discussions usual for a UN setting soon focused on the topic of Internet Governance: who rules the Internet? How can it be controlled?
And there was no clear conclusion on that.
This is because, in fact, no one controls the Internet.
The running of the core functions behind the Internet is coordinated – a better word than ‘governed’ – through a distributed collaboration of processes, mechanisms and organisations, each distinct yet interdependent.
The global and cross-border nature of the Internet challenges the concept of governance by governments or groups of governments alone. This is partly why these various governance processes have evolved organically to be both global and ‘multi-stakeholder’ in nature, resulting in a pioneering democratic effort to tackle these challenges in a novel way. Many – and we at ICANN among them – consider the multistakeholder model to be the most effective, open and transparent structure.
So why is 2015 going to be busy?
Well, a lot happened this year, paving the way for the processes that will unfold in 2015 – the inevitable shift from dialogue to action. The NETmundial conference in Sao Paulo in April 2014, in particular, came out of the realisation that Internet governance needed to move to a next level.
It gathered all stakeholders to draft and adopt through ‘rough consensus’ a series of important principles, starting with the respect of human rights and privacy online, and a roadmap for further improvements and evolutions of the system, including for the Internet Governance Forum, as well as a whole gamut of other aspects.
ICANN itself is in the middle of major evolution, with its globalization efforts, the transition of the IANA functions, together with a review on how to further enhance ICANN Accountability & Governance.
ICANN has undertaken a major programme of globalisation over the last two years in particular, opening two operational hubs in Istanbul and Singapore so that we are able to serve the global Internet community at any time, anywhere. Already about a third of ICANN's staff is based internationally, and this share is growing.
Likewise, we have embarked on a major effort of engagement of stakeholders around the world, to build capacity and encourage more participation from people from all over the world in ICANN.
We want to ensure that our community is representative of the global nature of the Internet; that is true for our staff, our stakeholders, our Board.
Then there is the topic, which has grabbed the headlines around the world this year: the intention to transition the US Government’s historic role of oversight of the core IANA functions, which ICANN administers, to ‘the global multi-stakeholder community’ by the end of September 2015.
At the end of the process, all those concerned with the Internet – from the technical community and governments to civil society – will share equal responsibility for overseeing these key functions.
What the U.S. is actually doing is preparing to transition its stewardship of a narrow set of technical functions performed by ICANN within the Internet’s infrastructure to … you, as part of the global multistakeholder community.
The IANA functions include the allocation and maintenance of the unique codes and numbering systems of the Internet (such as Internet Protocol addresses).
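Those numbering systems are concrete enough to inspect from any machine. As a small illustration – a sketch using only Python's standard library, with an address drawn from a block that IANA reserves for documentation – the `ipaddress` module reflects the special-purpose allocations maintained through the IANA functions:

```python
import ipaddress

# 192.0.2.0/24 ("TEST-NET-1") is reserved for documentation (RFC 5737),
# one of several special-purpose blocks recorded in the IANA registries.
addr = ipaddress.ip_address("192.0.2.1")

print(addr.version)    # 4
print(addr.is_global)  # False: documentation addresses are not globally routable
```

The same registries determine, for instance, which address blocks the regional Internet registries may in turn allocate to network operators.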
The U.S. announcement in March 2014 set into motion two open, public processes. One is for the global Internet community to develop a proposal for this stewardship transition. The second effort is to enhance ICANN’s governance and accountability mechanisms in light of the US Government’s transition away from its stewardship role.
This is an important moment in the history of ICANN; a testament to how the organisation and its community have matured.
We now have a multi-stakeholder model of governance and operational mechanisms that are ready to function on their own, led by a community of stakeholders rather than by a central, top-down authority, as demonstrated by the efficient management and coordination of the Domain Name System by ICANN and the Internet technical community over the past 16 years.
ICANN's mission is to maintain an open, singular and secure Internet. The global Internet is a unique tool that brings mankind together. It is incumbent upon us all to keep it that way: open, unique and global.
Working on increased access to an open, global, interoperable and expanding Internet is good for business and national economies.
And the opposite is true: if we compartmentalise the Internet, we would lose the vast benefits of cross-border exchanges, trading, free flow of information and knowledge, etc. that come with it.
Today, the Internet and everything to do with it is undergoing an evolution.
Everybody knows the importance of the Internet – to individuals, organisations, societies and nations – which makes understanding the current evolution all the more imperative for us all. 2015 will be all about this evolution and how best to serve the global community in the next phase of Internet governance.
We invite you to join us as a participant or an observer along any portion of this journey. This is how we will together sustain a global, unified Internet.
Visit www.icann.org/stewardship to get involved.
photo credits: Steve Rhode
Rigid net neutrality rules risk becoming an ineffective remedy to a badly defined problem. That's why politicians should leave such a complex issue to technical, independent regulators. Contrary to what many argue, a restrictive approach would not foster innovation.
U.S. President Barack Obama’s recent statement in favor of net neutrality is a good example of why politicians should stay away from bold statements when dealing with complex issues. And indeed, net neutrality is so complex, technically, economically and politically, that no one has found the way to square the circle: the aggravating and confusing factor is that the word “neutrality” sounds appealing, whereas “diversity” and “discrimination” inevitably sound negative to politicians.
This is why it is better to leave the hot potato to technical, independent regulators. President Obama certainly had good intentions: but there is reason to doubt that what he is advocating (putting unprecedented and ill-advised pressure on the FCC) would make users better off. Here’s why.
The Internet is not neutral, and never will be.
As neutrality advocates often recall, the Internet was designed to protect end users against discrimination and usage limitations, and to allow no intrusion into, or inspection of, files by any central “intelligence”.
However, this is not what the Internet is today, and not only because of the recent scandals generated by massive surveillance by government authorities in many countries. Since the 2010 FCC Open Internet Order entered into force, the “information superhighway” has become populated by cars with different engines and many toll lanes, which allow different speeds.
Companies such as Apple, Microsoft, Google, Netflix and many others make regular use of traffic-acceleration services, either developed in-house or purchased from third parties such as Akamai, Limelight, Huawei and Level 3. This is why some services work better than others on the Internet: in a fully neutral network, this would not be possible.
Mandating net neutrality for telcos and cablecos would not make the Internet neutral: the players that are able to either invest in their “content delivery networks” or purchase expensive services from third parties will still have a toll lane that others can’t afford.
Second, “over the top” products and services such as search engines, wireless and cloud platforms are not (and should not be made) neutral. Giant wireless platforms such as Android, iOS and Windows give priority to certain apps over others, and even block certain (very few) applications. They carry their own default browsers and apps.
Search engines such as Google, Yahoo! and Bing have to show some results first, and must do so in a way that matches their users' preferences; giant cloud providers such as Amazon and Microsoft sell suites that include some favored products, leaving others out or in the second row.
A neutral Internet would entail that all these companies refrain from customizing services for their end users: indeed, the European Commission seems to be lured by the sirens of “search neutrality” and “platform neutrality” in its antitrust investigation against Google. Would this be good or bad? Most likely, bad.
Third, mandatory net neutrality would not foster innovation as many argue. A “mantra” of neutrality advocates is that net neutrality is the only guarantee that a “new Google” or a “new Facebook” will emerge in the future, just as these successful young companies have done in the past.
But reality is different: try to name recent examples of successful start-ups, and see how many of them have emerged as new “apps” for existing platforms.
This shows how the non-neutral world of Internet platforms is lowering, rather than raising, barriers to entry in the marketplace. The same is happening in the cloud: as companies compete to become the leading cloud provider, they have an incentive to host as many promising start-ups as possible on their platforms: this is why Internet hyper-giants do not initially charge start-ups for services such as sub-domains, enterprise tools, search engine optimization capacity, and access to content delivery networks.
Based on the above, mandatory net neutrality risks becoming an ineffective remedy to a badly defined problem. If it is imposed only on telcos and cablecos, then the Internet will remain non-neutral as it is today, and competition for traffic acceleration services might even be reduced. But if neutrality is extended to search engines, operating systems, wireless platforms, then the Internet will die.
This is why FCC Chairman Wheeler is rightly careful: the solution to the problem can only be cautious and, if anything, deferential to the extraordinary value that the non-neutral Internet is creating for our society every day. This does not mean that specialized services should be left entirely unregulated.
On the contrary, they might well deserve careful monitoring, a good dose of technology to monitor quality of service, and sharpened competition rules.
Most importantly, we need to prevent the end-to-end Internet from being cannibalized by one-way networks: otherwise, video will kill (also) the Internet star. A nuanced solution, based on the healthy co-existence of specialized services and best-effort Internet, is best suited to the ever-changing nature of the Internet; imposing neutrality, by contrast, would be tantamount to throwing out the (cyber-)baby with the bath water.
The temptation to be resisted is praising neutrality as synonymous with freedom, democracy and openness.
It is not. Full-fledged, rigid net neutrality rules are equivalent to what the Trabant was in East Germany: the only car that people could have, very neutral, very bad, identical for everybody.
It became famous in the Western world when the Berlin Wall fell 25 years ago and thousands of East Germans drove their Trabants over the border: once in the “free” world, they immediately abandoned their “neutral” cars and started a new, non-neutral life.