All parties and stakeholders should continue to work hand in hand for high data protection standards all over Europe and generate the trust that is needed to reap the benefits that the digital revolution can provide.
The biggest lie on the Internet is ‘I have read and understand the Terms and Conditions’. At best one briefly scans a document that would otherwise make for a long and tedious read in legalese, especially for a non-English speaker.
In truth, no one really reads the fine print. To be perfectly blunt, who has the time – or desire – to ponder over a lengthy legal document in order to obtain access to a service or app?
Users of these services often have no alternative. But by accepting their terms, they weaken the control they have over their own data. It is unclear whether these conditions are always lawful and proportionate.
Furthermore, users are obliged to accept regular updates. Previously, one had the option of installing them or not, but not anymore.
These obligatory updates occasionally lead to critical problems and after an update, users must verify their privacy settings, as changes can be made without explicit notification. To make matters worse, public authorities sometimes ask us to use these technologies to interact with them.
Actions can and should be taken to protect European users. New ICT products and services should guarantee users’ privacy before they are introduced. Effective privacy enforcement should be guaranteed by demanding privacy by design and fostered by mechanisms that prevent the unnecessary collection of data.
The handling of personal data should be more transparent. Companies should collaborate on these issues, and regulation should define what minimum level of security is reasonable.
A number of alternative approaches are possible.
Prior to the introduction of new operating systems, services and applications, a certificate of conformity as proof of compliance with the EU General Data Protection Regulation and national Data Protection Acts could be required. A permanent independent group of experts could be established to execute mandatory checks.
Service providers could adopt a more preventive approach. The existing opt-out approach could be replaced with an opt-in model, whereby the transfer of personal data is explicitly authorised by the user and default settings initially prevent such a transfer.
Service providers could clearly inform users what data is transmitted and guarantee that none is transmitted without their explicit authorisation. They should also ensure that third parties cannot obtain this data.
The European Commission’s recent proposal to introduce new legislation to guarantee privacy in electronic communications is a step in the right direction.
But all parties and stakeholders should work hand in hand to protect consumers and companies and generate the trust that is needed to reap the benefits that the digital revolution can provide. Together let us stop the biggest lie on the Internet.
The 10-year extension of the IGF mandate is a testament to the IGF’s significant evolution over the past decade as the leading global forum for dialogue on Internet governance issues. What’s next?
As the Internet continues to evolve at breakneck speed, many critical issues still need to be addressed. A major area is Internet governance. This broad realm encompasses both governance of the Internet (essentially the business of ICANN and other technical organizations) and governance on the Internet (a range of issues affecting services and content, such as privacy and cybersecurity).
Last year, the U.N. General Assembly approved the renewal of the mandate of the Internet Governance Forum (IGF) for another 10 years. Born in Tunis at the end of the second phase of the World Summit on the Information Society (WSIS), the IGF has evolved to serve as the leading global forum for dialogue on Internet governance issues. Since the first forum in 2006, the IGF has been an annual event.
The IGF now gathers a growing number of experts from academia, civil society, governments, industry and the technical community. Traditional topics of Internet governance involve setting rules, standards, policies and providing technical support so that the world can be connected on the global Internet. Going beyond the technical issues, the IGF also deals with complex social, economic and transnational issues related to the use of the Internet.
Getting to where we are today has been both a challenging and rewarding journey that is still in progress.
The IGF has gone through times of skepticism about both its continued existence and its ability to fulfill its mandate. Over time, the IGF has gradually expanded beyond its original remit as a “discussion only” forum to include processes that can produce tangible and useful outcomes, as seen in the Best Practices Forums (BPF) and the Dynamic Coalitions. The 10-year extension of the IGF mandate is a testament to the IGF’s significant evolution over the past decade.
Earlier this month, over 2000 participants from 83 countries came together in Guadalajara, Mexico, with hundreds more participating remotely, to attend the 11th IGF meeting, the first since the mandate’s renewal.
As in previous years, ICANN’s Board directors, community leaders and senior staff attended the IGF. But unlike past years, the role of ICANN and the Internet Assigned Numbers Authority (IANA) functions did not take center stage in Guadalajara.
This is thanks to the Internet community that worked hard over the past few years to finalize the transition of the U.S. Government’s stewardship of the IANA functions to the global multistakeholder community. Instead, this year’s debate was focused on lessons learned from the IANA transition as a recent and successful example of a multistakeholder process in action.
With over 200 sessions, the 2016 IGF agenda covered the standard topics of Internet governance such as access, diversity, privacy and cybersecurity; plus more current issues related to online trade, the Internet of Things (IoT) and the U.N.’s Sustainable Development Goals (SDGs).
Links between SDGs, Internet governance and the IGF figured strongly on the agenda, with a main session and several other workshops organized on this topic. A common sentiment this year was that the IGF should focus more on the SDGs; a stance that was conveyed clearly during the “Taking Stock” session on the last day.
ICANN’s participation at IGF 2016 was led by CEO Göran Marby and Board Chair Stephen Crocker. Primary objectives were to emphasize the successful IANA stewardship transition as an example of how ICANN’s multistakeholder processes work, and to encourage participation in the ongoing work of ICANN’s Supporting Organizations and Advisory Committees.
ICANN’s goals are to continue supporting the multistakeholder model in Internet governance and contributing to global policy discourse with all interested parties – activities that are within ICANN’s mission and scope.
On the day before the event, ICANN organized a town hall session to reflect on the evolution of ICANN’s multistakeholder processes using the IANA stewardship transition as a case study. Presenters sought views from participants on their experiences with ICANN and how they envisage the challenges ahead.
In addition, ICANN community and organization staff planned and conducted workshops and roundtable discussions on a variety of topics such as the IANA transition, the new generic top-level domain (gTLD) Program, the role of noncommercial users in ICANN, law enforcement in the online world, and Asia and the next billion Internet users.
So, what’s next?
Geneva will host the 2017 IGF next December, and discussions about its strategic focus are already underway. Holding the event in Geneva, the second home of the U.N. and host to 192 government missions, may boost the participation of governments from developing countries and of non-U.S. businesses, both of which were concerns raised at Guadalajara.
The IGF Multistakeholder Advisory Group (MAG) will meet early in the new year to determine the focus for the 2017 IGF. At the top of the agenda will be how to deal with the call made by many in Guadalajara for more attention to meeting the targets of the SDGs. The MAG may also want to take up other issues, such as human rights and global trade accords.
After all, the U.N. Human Rights Council meets in Geneva, and the World Trade Organization (WTO) is based there. Once the focus is set, preparatory work can begin for the Geneva IGF.
All in all, 2017 will be an interesting year in furthering key goals in Internet governance.
The Digital Post speaks with FTC Commissioner Julie Brill about the new ‘Safe Harbour’, the implications of the EU privacy reform, and privacy issues arising from the boom of the Internet of Things.
The Digital Post: The European Union and the United States of America have reached an agreement on a new Safe Harbour data treaty. What are, in your view, the main achievements of the deal? What would the concrete risks have been had an agreement not been signed?
Julie Brill: The main achievement of Privacy Shield is that it provides strong privacy protections for European consumers and creates a framework for more parties to engage in active supervision and stronger enforcement cooperation. With respect to commercial data practices, Privacy Shield will provide stronger privacy protections than Safe Harbor did – through beefed up onward transfer requirements, and in other ways.
Privacy Shield will also establish more active supervision of the program in practice, so that the Department of Commerce, the European Commission, European data protection authorities (DPAs), and the FTC can detect and address any issues that come up. Privacy Shield will also provide a well-defined process for consumers to complain about the data practices of Privacy Shield companies.
The FTC will remain committed to giving priority to complaint referrals from DPAs, and there will be a better process in place for following up on these complaints. And even in the absence of referrals from DPAs, the FTC will continue to aggressively look for violations of the Privacy Shield principles.
Finally, in the area of national security, the United States agreed to take the unprecedented step of designating an ombudsperson to take complaints about surveillance activities that relate to Privacy Shield. This is in addition to the significant reforms that Congress and President Obama have made to surveillance practices in the past few years.
The risks if Privacy Shield hadn’t been agreed upon would have been that consumers and businesses would have continued in the limbo in which we currently exist, where some mechanisms to transfer personal data from the EU to the U.S. are still allowed, but they are expensive, opaque, and much more difficult for the FTC to enforce.
Of course, Privacy Shield still has many steps to take before it receives approval. If it were not approved, then companies – particularly small and medium enterprises – would lose out because of the time and resources that they have to put into alternative arrangements for data transfers.
But consumers also would lose out because they would have far less transparency into which companies are handling their data, the rules governing data transfers, and where to go to complain if they believe their rights are not being respected.
TDP: According to some observers, the new agreement won’t be sufficient to meet the concerns of the European Court of Justice. What is your opinion?
JB: It’s important to remember that the CJEU’s Schrems decision did not address national security surveillance practices in the United States. Rather, the case was based on the court’s concern that the European Commission’s adequacy decision in the year 2000 did not address U.S. privacy protections relating to national security surveillance.
It is hard to say how the CJEU would have assessed a full, accurate record concerning surveillance practices and privacy protections in the United States, had those facts been before the court. In any event, the U.S. has enacted significant reforms since the Schrems case was referred to the CJEU, and the U.S. is making further commitments through Privacy Shield.
On the whole, I believe these protections meet the CJEU’s standard of “essential equivalence to the EU legal order”, but we will have to wait to see if Privacy Shield is challenged to know whether the CJEU agrees.
TDP: Is the GDPR going to widen the chasm between EU and US regulatory approaches to data protection? How is the FTC working on this issue?
JB: The GDPR incorporates several provisions that either appeared first in the United States or are by now very familiar to companies and enforcers in the U.S. Examples include a focus on reasonable data security through a continuing process of risk assessment and mitigation, a general security breach notification requirement, heightened protections for children, privacy by design, and a recognition that deidentification can reduce privacy and security risks.
There are some differences between the European and U.S. versions of these provisions, but overall they show how developments in the U.S. can influence the direction that Europe takes.
On the other hand, some provisions of the GDPR move further away from the U.S. approach. A prime example is the GDPR’s right to be forgotten article, which extends to all data controllers. This expansion is a sharp contrast to the very targeted and specific provisions of U.S. law that help individuals keep some information about themselves obscure.
Companies and regulators on both sides of the Atlantic need to start working out answers to the many questions that the GDPR raises. That’s one reason that I think it’s so important for us to move beyond the issues surrounding mechanisms for data transfers that have dominated the discussion for the past several months.
With the announcement of an agreement on Privacy Shield in the past several weeks, I hope we now can begin to discuss the GDPR and issues like big data and the Internet of Things in a more sustained and meaningful way.
TDP: The FTC has been focusing on privacy issues related to the booming sectors of the Internet of Things and Big Data. What are the risks? How should regulators deal with this very sensitive issue?
JB: There are important roles for enforcement, policy development, and business and consumer guidance in the Internet of Things and Big Data ecosystems. On the policy and guidance front, the FTC has been taking a close look at the potential benefits and risks of the Internet of Things and big data.
We have hosted public workshops, taken public comments, and written key reports on the broad range of technical and economic concerns that arise from having many more connected devices, huge volumes of personal data, and rapidly improving analytics.
We heard a lot about the exciting possibilities to solve problems in health care, transportation, the environment, education, and other areas; but we also learned about significant risks. Security is a huge challenge with the Internet of Things.
Not only are many devices being offered by companies that do not have long track records with data security, but these devices are also being used in ways that collect highly sensitive information and create physical risks to consumers.
With respect to big data, we found that there is a potential for unfairness or discrimination to enter through biases in data collection and analysis. Some of these issues could get companies into trouble under fair lending, credit reporting, or other laws. Other issues arise in settings that these laws do not cover, but companies still need to be aware of them because they may be deceptive or unfair.
Enforcement also plays an important role in the FTC’s approach. We have already brought enforcement actions relating to privacy and security violations with IoT devices. We have the authority to stop unfair or deceptive practices – whether or not they involve new technologies and business practices – and we will use it in appropriate cases.
Web firms may have an interest in pursuing the monetization of users’ data with more moderation. If they don’t, privacy concerns as well as adoption of tracking and advertisement blocking tools could grow to a point where innovation will suffer.
As part of a recent keynote during the inaugural workshop of the Data Transparency Lab (Nov 20, 2014, Barcelona) I hinted that a Tragedy of the Commons around privacy might be the greatest challenge and danger for the future sustainability of the web, and the business models that keep it going.
With this post I would like to elaborate a bit more on what I meant and maybe explain why my slides are full of happy, innocent looking cows.
What is the Tragedy of the Commons?
According to Wikipedia:
“The tragedy of the commons is an economic theory by Garrett Hardin, which states that individuals acting independently and rationally according to each’s self-interest behave contrary to the best interests of the whole group by depleting some common resource. The term is taken from the title of an article written by Hardin in 1968, which is in turn based upon an essay by a Victorian economist on the effects of unregulated grazing on common land.”
In the classical Tragedy of the Commons, individual cattle farmers acting selfishly keep releasing more cows onto a common parcel of land despite knowing that a disproportionate number of cows will deplete the land of all grass and drive everyone out of business.
All the farmers share this common knowledge, but do nothing to avoid the impending tragedy.
Selfishness dictates that it is better for a farmer to reap the immediate benefit of having more cows, diverting the damage to others and/or pushing the consequences to the future.
The utopian outcome for each farmer is that he can keep accumulating cows without having to face the tragedy because, miraculously, others will reduce the size of their herds, saving the field from becoming barren. Unfortunately, everyone thinks alike and thus, eventually the field is overgrazed to destruction.
Are there cows on the Web?
There are several.
Not only in .jpeg, .gif or .tiff but also in other formats that, unlike the aforementioned compression standards, can lead to (non-grass-related) tragedies. In my talk I hinted at the following direct analogy between the cow-related abstraction and the mounting concerns about privacy and the web.
Farmer: A company having a business model around the monetization of personal information of users. This includes online advertising, recommendation, e-commerce, data aggregation for market analysis, etc.
Cow: A technology for tracking users online without their explicit consent or knowledge. Tracking cookies, analytics code in websites and browsers, browser and IP fingerprinting, etc.
Grass: The trust that we as individuals have in the web, or more accurately, our hope and expectation that the web and its free services are doing “more good than bad”.
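To make the “cow” of browser fingerprinting above a bit more concrete, here is a minimal, hypothetical sketch of the idea: a site combines attributes it can read without the user’s consent into a stable identifier. The attribute names and values below are illustrative assumptions, not taken from any real tracker.

```python
import hashlib

def browser_fingerprint(attrs):
    """Combine browser attributes into a stable identifier.

    attrs: a dict of properties a page can read without asking
    (user agent, screen size, timezone, language, ...). The keys
    and values used here are illustrative, not real tracker data.
    """
    # Canonicalize so the same browser always produces the same string
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

fp = browser_fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/Madrid",
    "language": "en-US",
})
# The same attribute set yields the same identifier on every visit,
# which is what lets sites recognize a browser without cookies.
```

Because no cookie is ever stored, blocking or deleting cookies does not defeat this kind of tracking — which is why fingerprinting earns its place among the “cows”.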
The main point here is that if the aforementioned business models (farmers) and technologies (cows) eat away user trust (grass) faster than its replenishment rate (free services that make us happy), then at some point the trust will be damaged beyond repair and users … will just abandon the web.
As extreme as the last statement may sound, the reader needs to keep in mind that other immensely popular media have been dethroned in the past. Print newspapers are nowhere near where they used to be in, say, the ’30s.
Broadcast television is nowhere near its height in the ’60s (think of the moon landing, JFK’s assassination, etc.).
The signs of quickly decaying trust in the web are already here.
– More than 60% of web traffic was recently measured to be over encrypted HTTPS, and all reports agree that the trend is accelerating.
– AdBlock Plus is the #1 Firefox add-on on the Mozilla download page, with close to 20 million users. Other browser and mobile app marketplaces are heavily populated with anti-tracking add-ons and services.
– Regulators on both sides of the Atlantic are mobilizing to address privacy related challenges.
If ignored, the mounting concerns around online privacy and tracking on the web may lead to mass adoption of tracking and advertisement blocking tools. Removing advertising profits from the web probably means the end of free services that we currently take for granted.
The impact on innovation will be a second negative consequence. Lastly, let’s not forget that advertising and recommendation are something most users desire, provided that certain red lines are not crossed.
What constitutes a red line may change from person to person but certain categories are safe candidates (health, sexual orientation, political beliefs).
In a recent study we have shown that it is possible to detect Interest-based Behavioral Targeting (IBT) and have delved into specific categories to measure the amount of targeting that goes on.
What can we do to avoid an online tragedy of the commons?
“Sunlight is the best disinfectant”
The famous quote of U.S. Supreme Court Justice Louis Brandeis may have found yet another application in dealing with the privacy challenges of the web.
Despite the buzz around the topic, the average citizen is in the dark when it comes to how their personal information is gathered and used online without their explicit authorization.
A few years ago we demonstrated that price discrimination seems to have already crept into e-commerce. This means that the price one sees in one’s browser for a product or service may differ from the one observed at the same time by a user in a different location.
Even at the same location, the personal traits of a user, such as their browsing history, may affect the price they are offered.
To permit users to test for themselves whether they are being subjected to price discrimination, we developed (the price) $heriff, a simple-to-use browser add-on that shows, in real time, how the price seen by a user compares with the prices seen by other users or by fixed measurement proxies around the world.
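The comparison at the heart of such a tool can be sketched in a few lines. This is not $heriff’s actual implementation — just a simplified illustration of the idea: compare the price a user sees with prices collected for the same item, at the same time, from measurement vantage points elsewhere, and flag large deviations. The 5% tolerance threshold is an arbitrary assumption.

```python
from statistics import median

def price_deviation(user_price, reference_prices, tolerance=0.05):
    """Compare the price a user sees against prices observed elsewhere.

    reference_prices: prices for the same item collected at the same
    time from other vantage points (hypothetical inputs). Returns the
    relative deviation from the median reference price and whether it
    exceeds `tolerance`, hinting at possible price discrimination.
    """
    baseline = median(reference_prices)
    deviation = (user_price - baseline) / baseline
    return deviation, abs(deviation) > tolerance

# Proxies elsewhere see roughly 100 for the same product
dev, flagged = price_deviation(112.0, [99.5, 100.0, 101.0])
# dev = 0.12, flagged = True: this user pays 12% above the baseline
```

The median is used as the baseline so that a single outlier vantage point cannot skew the comparison.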
Researchers at Columbia University and Northeastern University have, in a similar spirit, developed tools and methodologies that permit end users to test whether the advertisements or recommendations they receive have been specifically targeted at them, or are just random or location dependent.
Tools like $heriff and X-ray improve transparency around online personal data. This has manifold benefits for all involved parties:
– End users can exercise choice and decide for themselves whether they want to use ad blocking software and when.
– Advertising and analytics companies can use the tools to self-regulate and prove that they abstain from practices that most users find offensive.
– Regulators and policy makers can use the tools to obtain valuable data that point to the real problems and help in drafting the right type of regulation for a very challenging problem.
Mooo, who needs more tragedy????
The new Safe Harbour ruling has shown the difficulties in adapting existing legal rules to the globalised, digital era. Online privacy legislation is clashing with modern business models, while European regulators are struggling to balance citizen rights with the desire to boost the competitiveness of the tech industry.
If someone were asked to guess which issue has generated the fiercest debate in Brussels in recent months, the European Court of Justice (ECJ) Safe Harbour ruling would be the answer. Brussels has not stopped talking about it since 6 October. The name ‘Safe Harbour’, which evokes a secure environment, no longer fits the legal uncertainty and insecurity that the ruling has generated.
The ECJ ruled that the transatlantic Safe Harbour agreement, which allows American companies to use a single standard for consumer privacy and data transfer of private information between the EU and the US, is invalid.
With its ruling the ECJ has considerably challenged, if not disrupted, the framework put in place to ease transatlantic information sharing, deeming it inadequate, especially in light of the surveillance allegations and scandals involving U.S. intelligence services (including the NSA).
The upshot of the ruling is that there are now only limited pan-EU rules on data flow from Europe to the USA.
The ECJ has caused quite a stir in the tech world with its recent judgment. Tech companies, big and small, are scrambling to see what data they process and where it is transferred. Most multinationals are now legally obliged to suspend any transfer of their customers’ data to the USA and move their data storage and operations to an EU subsidiary.
Has anyone quantified the economic implications of an actual halt to data transfers between the EU and the USA? A power blackout is the best analogy I can think of.
The European Commission has therefore been put in a tough position. While it has to support the ruling by the European Court of Justice and guarantee citizens’ privacy, it has evoked the ire of the ICT industry. Trade and business associations are lobbying for a pragmatic solution, namely a transition period that would legalise the current transatlantic data flows.
The Commission has also promised guidelines for companies and data processors by early November and is working together with the national authorities to prevent fragmentation. But industry fears that this will not prevent headaches, stress and costs. A German data protection authority, for instance, has already warned it would fine non-compliant companies severely.
Meanwhile, Europe and the USA have also been negotiating a renewed Safe Harbour agreement. The ruling comes in the midst of these talks and will be an extra source of pressure. However, little can be done to accommodate the ruling unless America agrees to suspend its surveillance mechanisms on EU citizen data, which would be a very big ask.
In summary, the new Safe Harbour ruling has shown the difficulties in adapting existing legal rules to the globalised, digital era. Online privacy legislation is clashing with modern business models, while European regulators try to balance citizen rights with the desire to boost Europe’s tech industry and remain competitive. It’s a fierce storm with no lighthouse in sight.
Despite what many people may think, there is no real lack of capital for Europeans interested in launching their own start-ups in the digital domain.
The rise of (digital) technology start-ups is a global phenomenon, with extensive start-up ecosystems – such as the one in Silicon Valley – being replicated all over the world. Like any other region, Europe is highly interested in reaping the economic and societal benefits of a flourishing start-up economy.
In a recent speech, Neelie Kroes (the former Commissioner for Europe’s Digital Agenda) stated, for instance, that two out of three (!) new jobs in Ireland are created by start-ups in their first five years of existence.
Not all is rosy, though. Critics often say that it remains hard for European start-ups to get access to the proper financial means to kickstart their businesses.
But is that really the case?
It’s definitely not their biggest problem. Despite what many people may think, there is no real lack of capital for Europeans interested in launching their own start-ups in the digital domain.
Virtually every region has done a good job of developing the appropriate funding mechanisms to support start-ups’ launch activities. In other words: it’s not (all) about the money. As a matter of fact, three bigger threats to European start-ups’ longer-term growth can be discerned – culture, regulation and mindset.
A first issue is Europe’s fragmented market – not so much from a geographical perspective, but rather from a cultural one. Indeed, in spite of all good intentions, it remains difficult for European start-ups to sell their products across ‘cultural’ borders. The use of different languages is one obstacle, of course, but divergent social aspirations and cultural values are equally important barriers.
For example, selling a solution for personalized online advertising might be perfectly acceptable in one region because of the advantages it brings (instead of being spammed, one only gets to see those ads that are in line with his/her interests), but it may fail completely in cultures where this is perceived as a direct assault on people’s privacy.
Intra-European legal and regulatory barriers present additional obstacles. A concrete example is the burden that accompanies the launch of pan-European digital health solutions, with each European country having issued its own regulations related to the development, sale, usage and reimbursement of products and services in the digital health realm.
And finally, there’s mindset. Contrary to the US, where everything is big and aimed towards rapid international expansion, European start-ups typically have a more ‘provincial’ mindset. In today’s global, digital economy, though, that’s a major shortcoming. In order to really succeed, start-ups should have international ambitions right from the start.
As we have already observed, none of those barriers exists in the US – making this geographical and cultural region a single, big ‘unified’ market of more than 320 million consumers.
Both its scale and transparency make it an easier market in which to introduce products and grow. Somewhat ironically, even conquering the rest of the European market is typically easier if done from the US…
So, how can we address those challenges? I see three important lines of action, in which European policy makers have a major role to play:
– From a regulatory perspective, measures should be taken to further unify the European market – so that its full potential of more than 500 million consumers and potential investors can be tapped.
Streamlining regulation in domains such as digital health, for instance, would already open up a wide range of growth opportunities for potentially hundreds of European start-ups.
Obviously, this would not help us overcome the cultural boundaries overnight; but to that end, instruments are already in place, such as the European Network of Living Labs (ENoLL), to help companies investigate how people will respond to new products and features – before the actual market launch.
– To foster the pan-European growth of start-ups and overcome the provincial mindset, a number of good initiatives have already been taken as well.
One concrete example is the creation of EIT Digital, which helps European start-ups accelerate their growth – among other things by finding European and worldwide customers for their products and solutions, or by helping them raise funds.
– And finally, when it boils down to securing first customers, Europe should investigate the concept of ‘innovative procurement’– a best practice that has already been widely adopted by the UK and US administrations. It requires government bodies and local branches of big multinationals to allocate a certain percentage of their public procurement activities to innovative start-ups.
As such, start-ups can more easily get the necessary credentials and references to continue growing their businesses. According to certain estimates, public procurement is worth €2,000 billion to the EU economy – so dedicating even 1% of that amount to innovative procurement still equals €20 billion per year to support the European start-up ecosystem.
But also for that, a cultural and regulatory shift is required…
Protectionist policies, such as the recently adopted German restrictions on public sector cloud use, can ultimately translate into a threat to the open and global structure of the Internet, argues Daniel Castro, Vice President of the Information Technology and Innovation Foundation and Director of the Center for Data Innovation.
The Digital Post: Newly adopted German rules for government cloud computing mean official data can only be processed in Germany. What is your opinion?
Daniel Castro: This is an unfortunate development, both for Germany and for others. First, countries like Germany should be allies in support of free trade; by enacting these types of non-tariff barriers to trade, Germany gives cover to other countries that want to enact protectionist measures.
Second, by restricting access to foreign cloud providers, Germany is “cutting off its nose to spite its face.” German organizations benefit from having access to the best cloud providers, and many of these are foreign companies. This will raise costs and decrease productivity for affected organizations.
Third, there is little real benefit in terms of privacy and security to storing data within the country versus abroad. Countries should be working to clarify any distinctions. This is one reason my think tank has called for a “Geneva Convention on the Status of Data” to determine when government agencies can lawfully request access to data.
Most developed countries should be able to agree to common standards and abide by them. The end goal should be a data free trade zone that extends globally.
The Digital Post: Ever since the Snowden revelations came out, German Chancellor Angela Merkel has been advocating for a separate European communication network/infrastructure. What might be the implications of such a project, if it is ever implemented?
Daniel Castro: The United States and Europe are allies on many issues, and it would be counterproductive to build separate infrastructure rather than working together towards a common goal.
Neither wants the other to spy on them, so they should be able to come to terms to upgrade the infrastructure we already share.
The greater threat to both U.S. and German interests comes from China, so there is an opportunity to put aside past disagreements and come together to confront a looming issue.
The Digital Post: You often speak about the rise of “data nationalism” across the world. What is this phenomenon about?
Daniel Castro: Many countries are trying to pass laws and regulations to keep data within their borders, such as by requiring data to be processed locally. One reason countries are doing this is because they believe it will help create jobs, such as construction jobs for data centers.
But the net impact is very negative, as it raises the cost of doing business for the rest of the economy, and many businesses are increasingly dependent on cloud infrastructure. Moreover, some rules limit cross-border data flows which means a multinational company will run into serious issues as it tries to operate on a global scale.
The Digital Post: Is data nationalism a threat to the current structure and functioning of the Internet? Why?
Daniel Castro: Yes. The primary benefit of the Internet is that it is a global, open network available to all. Protectionist policies can chip away at this ideal until we are eventually left with a series of disjointed national or regional Internets.
Policymakers should be very concerned about overreacting to short-term fears about data privacy at the expense of damaging the potential growth of data-driven innovation in the Internet economy.
Amid the overflow of ongoing debates about security and privacy in Europe, the discussion over an EU-wide passenger name record (PNR) system is most telling of Europeans’ data schizophrenia. But interestingly enough, moving beyond the PNR controversy could also propel Europe into a new era of security, prosperity and privacy.
It’s always a strange feeling, and it happens more often than not within the EU: from the moment you walk into the airport to the moment you exit your arrival terminal, it may happen that no one asks you to show an ID at any point during your journey.
You scanned your QR code to access the departure gates, you scanned it again to go through security and then to board, you flew and crossed one or multiple borders, you exited the plane, walked through baggage claim without stopping, and then customs, and you are in another country. No one ever asked you for an ID, and it is always a strange thought that the passenger sitting next to you might not be the person whose name is on the ticket.
Reality, though, is much different. Passenger information collected by airlines is accessible to law enforcement authorities, who can ‘pull’ the names of suspected terrorists or criminals. But suspects must already be on a list for the system to work.
With rising concerns throughout Europe that terror attacks may be the work of EU citizens radicalized and trained abroad, many argue for a system that would allow authorities to draw patterns and pre-empt the worst from happening.
It was in this spirit that an EU-U.S. PNR agreement was adopted as early as 2007, providing a framework for the transfer of EU air passengers’ personal data to the US authorities. With such a system, airlines ‘push’ passenger information to law enforcement authorities, who are then able to identify ‘unknown’ suspects.
Put bluntly: after analysis of their data (travel dates, itinerary, baggage and payment information, etc.) passengers can become suspects; they get on the list.
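The ‘pull’ versus ‘push’ distinction above can be made concrete with a minimal sketch. This is purely illustrative (the record fields, watch list, and heuristic are hypothetical, not any real PNR system): under ‘pull’, authorities retrieve only records matching an existing list; under ‘push’, airlines transmit full records so analysis can flag previously unknown suspects.

```python
# Illustrative sketch of the 'pull' vs 'push' PNR data flows.
# All names, fields and the suspicion heuristic are hypothetical.
from dataclasses import dataclass, field


@dataclass
class PassengerRecord:
    name: str
    itinerary: list = field(default_factory=list)  # flight legs
    payment_method: str = "card"


WATCH_LIST = {"known suspect"}  # hypothetical pre-existing list


def pull_model(records, watch_list):
    """'Pull': authorities query only names already on a watch list."""
    return [r for r in records if r.name in watch_list]


def push_model(records, is_suspicious):
    """'Push': airlines send all records; analysis flags new suspects."""
    return [r for r in records if is_suspicious(r)]


records = [
    PassengerRecord("known suspect", ["CDG-JFK"], "card"),
    PassengerRecord("unknown traveller", ["CDG-IST", "IST-KBL"], "cash"),
]


def is_suspicious(r):
    # Toy heuristic standing in for real pattern analysis of
    # travel dates, itinerary, baggage and payment information.
    return r.payment_method == "cash" and len(r.itinerary) > 1


print([r.name for r in pull_model(records, WATCH_LIST)])    # ['known suspect']
print([r.name for r in push_model(records, is_suspicious)]) # ['unknown traveller']
```

The sketch shows why the push model is both more powerful and more privacy-sensitive: it requires handing over every passenger’s data, not just the data of those already under suspicion.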
Understandably, for security purposes, such a tool can be very useful. It makes sense between allies, and the EU has signed such agreements with Canada and Australia too. It makes even more sense between EU countries; something that has yet to be achieved.
But understandably as well, such a tool raises a number of questions when it comes to the type of data collected, the authorities who will have access to it, the duration of retention of the data, and the risks such profiling could present to individual fundamental rights.
In the aftermath of the Paris terror attacks of January 2015, it made renewed sense in Europe to push for a single EU-wide PNR. While member states can perfectly well design their own systems – and some already have – such fragmentation defeats the purpose of ensuring the same high level of security and privacy for all EU citizens in all EU countries.
In a resolution that put an end to a long-standing stalemate between EU institutions, members of the European Parliament called in February for progress to be made by the Commission and member states on issues related to data retention and data protection, with a view to adopting an EU PNR legislation by the end of the year.
Reform of data protection rules and adoption of a necessary and proportionate PNR system should therefore be done in parallel. Such conditionality is risky on many levels. But it could also give a much awaited boost to discussions in Europe on data retention and data privacy.
In addition, finding the right balance in the EU PNR legislation could prove valuable for negotiations with partner countries. Mexico was set to start imposing fines of up to $30,000 per flight on European airlines from April 1st unless it was given sufficient guarantees that an EU-Mexico PNR agreement would soon see the light of day.
The EU gave those guarantees, and the deadline was pushed back to July 1st. Mexico’s muscle-flexing is not surprising in the current global security context. Other countries, such as Japan and Korea, have asked for the same in the past, and could potentially invoke the Mexican precedent.
As it now stands, the debate over PNR in Europe reaches far beyond the borders of the EU. It reaches to the core fundamentals of both individuals and nations: liberty and security. “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety”, cautioned Benjamin Franklin.
Perhaps, but when it comes to personal information, the assumption of an inevitable trade-off between security and privacy should not be the norm.
The mere idea of such a trade-off makes it virtually impossible for policymakers or citizens to move forward constructively. The EU should, on the contrary, work towards increasing collective security by raising the standards of privacy protection. And the parallel discussions over PNR provide an ideal framework to do so before the end of the year.
We still may not know who is really sitting next to us on the plane after that, but someone definitely will.
The Safe Harbour agreement is not the appropriate instrument to solve transatlantic tensions over government mass surveillance in the US. The issue should be addressed separately from the US-EU commercial agreement regulating data transfer, whose suspension would leave companies in the middle of a jurisdictional conflict they cannot themselves resolve.
The EU-US Safe Harbour agreement has been the subject of a great deal of interest in recent weeks. At the end of March the head of the Article 29 Working Party, which represents Europe’s data protection authorities, raised the subject in the context of mass surveillance of private data by US security agencies in front of the European Parliament’s Civil Liberties (LIBE) Committee.
At the same time Justice Commissioner Vera Jourova announced that she intends to conclude a revision of Safe Harbour with her US counterparts at the end of May. The debate is set to intensify this month as negotiators count down to the self-imposed deadline for revising the 14-year-old bilateral agreement.
Amid all this attention it is worth pointing out a few things about Safe Harbour that have been overlooked in much of the media coverage of the subject, and to explain why it is so important to revise rather than suspend the mechanism.
The EU-US Safe Harbour agreement facilitates transatlantic transfers of commercial data by European and US companies of all sizes. It is a vital tool for a wide range of industries engaged in the trade in goods and services between the EU and the US.
The agreement needs to be refreshed and we support the efforts of the European Commission to improve it. We are confident that the reform of Safe Harbour can be achieved through political discussions between the two trading partners.
While respecting citizens’ right to privacy, we believe an improved Safe Harbour agreement must continue to facilitate data transfers conducted by law-abiding companies.
Any suspension of Safe Harbour would affect American and European companies alike, and it would be especially burdensome for small and medium size enterprises that use the mechanism for data transfers to the US.
A suspension would clog up perfectly legitimate, non-controversial, safe flows of non-personal as well as personal data, and it would therefore have significant economic consequences for the US and the EU.
Similarly, if national data protection authorities were empowered to override EU level agreements such as Safe Harbour, as suggested by some national data protection authorities last month during a hearing at the Court of Justice of the EU (CJEU), this would lead to the splintering of EU rules on international data transfers.
This in turn would undermine efforts to create a digital single market, and instead create even more fragmentation and legal uncertainty within the EU than there is today. At the heart of the case being heard in court last month is the issue of protection of a citizen’s private data from US security agency surveillance.
The tech industry in the US has joined forces with privacy groups in opposing efforts to extend bulk surveillance by US security agencies. In Europe we have been criticised by European security agencies for placing too high a priority on citizens’ privacy.
DIGITALEUROPE shares the concerns of the public and opposes the bulk collection of citizens’ data by state security agencies. However, the Safe Harbour agreement is not the appropriate instrument to solve this problem. Isabelle Falque-Pierrotin, Chair of the Article 29 Working Party, said as much at a meeting with the European Parliament‘s LIBE Committee at the end of March.
Attempting to solve the problem through the revision of Safe Harbour would only deflect attention from the real discussions that need to occur.
The issue requires direct government-to-government negotiations on the norms for cyber surveillance and access by authorities. It cannot be resolved in a commercial agreement, which would leave companies in the middle of a jurisdictional conflict they cannot themselves resolve.
We urge the European Commission, which leads the European negotiating team, to treat this task separately from the revision of rules to allow for the transfer of commercial data from Europe to the US. For more information please read our position paper on the Safe Harbour revision.
The draft Data Protection Regulation (DPR), as it has been amended by the European Parliament, would seriously impair Europe’s competitiveness in medical research and innovation, which in turn will have a negative impact on health and wellness in the population.
The right to privacy and the consequent need for data protection is an aspiration of the great majority of people living in Europe. The draft Data Protection Regulation (DPR) is designed to provide the legislative framework for protecting peoples’ right to privacy and generally it does a very good job.
However, there is another right to which a great majority of citizens aspire.
The right to a healthy life, which implies the right of access to the best possible medical care.
Excellent medical care is impossible without excellent medical research, which provides the new diagnostic instruments, drugs and other measures for keeping people healthy.
Balancing these two rights is critical if we are to protect individuals’ data, while avoiding harmful unintended consequences for research. As a practicing doctor and active medical researcher I believe there is a danger of that happening if the Parliament’s amendments to the DPR are adopted.
Medical research has increased the average European life span from around 40 to over 80 years in the 20th century. It has eliminated polio with vaccination; turned AIDS into a non-fatal disease; protected young women from cervical cancer; and cured stomach ulcers with antibiotics, eliminating the need for surgery.
The list is enormous. Medicine has also provided much employment in Europe in the last century.
Effective medical research requires a large number of different types of data, from varied sources, ranging from laboratories to hospitals to census information or medical records. Epidemiological and association studies often require very large data sets, based on information from thousands of people, to get the answers we need for disease prevention and to maintain “wellness”.
There is a tried and tested system for protecting the privacy of individuals. It is based primarily on local ethics committees that include lay representation and which work to an international standard expressed in the Helsinki Declaration.
In addition, peer review guarantees scientific validity and medical importance and eliminates unnecessary experimentation, or experimentation that cannot achieve a result to the question posed. This culture includes developing new methods to protect data, which are refined as science and technology progress.
Europeans view their health as being as important as their right to privacy. In a Eurobarometer survey from 2014, 40 per cent of respondents said that “treatments that work” are one of their criteria for high quality healthcare.
Health research is essential for discovering and testing better treatments.
In Europe’s socialised health systems, the vast majority of people contribute to funding health care through taxes. Data collected during the care of an individual clearly belong to that individual.
But perhaps we should consider whether every citizen who pays for the health system also has a right for this rich information to be shared – safely and securely – to improve the health and wellbeing for all of us.
At present, the legal framework and the culture of safety in medical research respects the balance between the right to privacy and the right to health. That important balance will be maintained if the final version of the Regulation retains the substance of the medical research exceptions proposed in the initial Commission draft.
I believe that the adoption of the Parliament’s amendments to the Regulation would seriously impair medical research by preventing some studies and creating ambiguous rules and unwieldy, bureaucratic processes for others.
Europe’s competitiveness in medical research and innovation will be blunted, impacting negatively on opportunities for job and wealth creation, which in turn will have a negative impact on health and wellness in the population. This is a vicious circle that must be avoided.
This post was originally published on the website of the European Data in Health Research Alliance (www.datasaveslives.eu / @datamattersEU)
On 25 January the proposed EU General Data Protection Regulation is turning three. The irony is that it won’t come into force before it is six. That is, of course, the best-case scenario.
A compromise between EU governments is unlikely to be concluded before the end of the year, or even later. After that, the Council must reach consensus with the European Parliament on the wording of the final text. A lot of time may have passed by then.
Much more time than Viviane Reding could have imagined when she unveiled the proposal in 2012. What’s more, the wait won’t be over. Once adopted, there is expected to be a two-year transition period before the regulation takes effect.
In the meantime many things are likely to change under the skies of Europe. The regulation was proposed in the wake of the NSA spy scandals, with most EU leaders calling for stricter privacy rules in order to placate public opinion. However, in the aftermath of the Charlie Hebdo shooting, the political agenda in many EU countries seems to be shifting from privacy worries to surveillance efforts.
On the other hand the rise of new technologies, such as the Internet of Things or Big Data, may present challenges to privacy that will require fresh legislative intervention.
In short, when the regulation enters into force in 2018 – or even later – it risks being out of touch with reality, and partially ineffective. European paradoxes at their best.
How big is the divide between the United States and Europe when we talk about data protection and cybersecurity? And what is at the basis of the current differences between the two regional players? Is it just Snowden and the NSA, or is it a deeper issue?
I had the privilege to contribute to the European perspective among a large group of experts attending an interesting exchange about this in Washington DC. What was supposed to be a conference on cybersecurity policy and regulation became an exchange on privacy and data.
It is difficult for the EU and the US to work together in the field of online and data security as long as we have those other open quarrels on privacy protection and the rules applicable to transatlantic transfer of data.
I was a bit surprised to realize that the EU’s whole data protection regulatory approach is observed with a mix of admiration and respect by US experts. And, interestingly enough, it is our model that is on the way to becoming a world standard among democracies.
Both systems are clearly different: in Europe we have a structured piece of legislation on privacy (currently under full revision), with clear definitions of privacy-related data, clear rules about the rights and obligations related to the use of those data, and clear authorities responsible for enforcing those measures.
In the US, the legal protection of online privacy results from a diversity of legal instruments, enforced by different agencies and authorities. This is combined with the importance given to self-regulation by companies.
But is citizens’ demand for online privacy really that different in Europe and the US? No, as a matter of fact it isn’t. Americans want their privacy protected as well.
What is radically different between “us” and “them” can be reduced to one word: trust.
Europeans do not trust their governments’ management of data and will not give them a blank check.
The Stasi, Ceaușescu and many other personal experiences of authoritarian invasion of private life have taken their toll on the public perception of the risks of abuse of personal data. The US public does not have that memory. It is not a perceived risk, definitely not in mainstream public opinion.
And what about private companies? How much of Europeans’ mistrust of Facebook or Google is directed at those two companies and their privacy policies, and how much at their potential collaboration with the US Government? It is difficult to say.
But what appears clear is that a large number of Europeans expect their public authorities, including their European legislators and the European Commission, to assume an active role in protecting them from this potential external intrusion.
An intrusion which, in the public perception, comes from the United States: partly from its Government, partly from its huge corporations, which control the largest part of our online digital life.
If this is true, then the current transatlantic difficulties regarding online privacy require a social approach: they must deal with this citizens’ mistrust, and are not just a matter of technical negotiation between experts or bureaucrats on both sides of the Pond.
The matter will only be solved if this public trust is reinforced. US companies have a lot to lose if the transatlantic flow of private data is halted – if the “Safe Harbour” scheme, which currently regulates the cases in which private data originating in Europe can be transferred to and stored in the US, is suspended.
But we know well that this scheme is not working, and the threat of annulment is real (it may be the Court of Justice that annuls it in the first place). The Commission – under huge pressure from the Parliament – is negotiating this with the US.
But this is not just a legalistic issue to be solved like a trade negotiation over battery standards. This is a problem with deep social roots. The sooner American decision makers – in Congress, in Government – understand this, the easier it will be to rebuild the indispensable trust on the part of Europeans.
And only with that trust in place will we be able to work together, US and EU, in search of common answers to the essential common threats to our online and digital security.