• Data Economy

    Why the EU struggle to regulate data protection should continue

by Fiona Fanning | 01/Feb/2017 | 5 min read

All parties and stakeholders should continue to work hand in hand for high data protection standards all over Europe and to generate the trust that is needed to reap the benefits that the digital revolution can provide.

    The biggest lie on the Internet is ‘I have read and understand the Terms and Conditions’. At best one briefly scans a document that would otherwise make for a long and tedious read in legalese, especially for a non-English speaker.

    In truth, no one really reads the fine print. To be perfectly blunt, who has the time – or desire – to ponder over a lengthy legal document in order to obtain access to a service or app?

    Yet service providers continually ply us with their increasingly invasive privacy policy conditions. We are left with no choice but to accept them. The options are clear: ‘take it or leave it’. This is particularly dangerous in the realm of e-government, e-banking and e-commerce.

    Users of these services often have no other alternative. But by accepting their terms, they weaken the control they have over their own data. It is unclear whether these conditions are always lawful and proportional.

    Furthermore, users are obliged to accept regular updates. Previously, one had the option of installing them or not, but not anymore.

    These obligatory updates occasionally lead to critical problems and after an update, users must verify their privacy settings, as changes can be made without explicit notification. To make matters worse, public authorities sometimes ask us to use these technologies to interact with them.

    Actions can and should be taken to protect European users. New ICT technologies should guarantee the privacy of potential users prior to their introduction. Effective privacy enforcement should be guaranteed by demanding privacy by design and fostered by mechanisms that prevent the unnecessary collection of data.

    The handling of personal data should be more transparent. Companies should collaborate on these issues, and regulation should define what minimum level of security is reasonable.

In addition, appropriate levels of security should be ensured by the reliable implementation of updates. New data protection mechanisms should also be introduced to prevent major service providers’ stringent privacy policy conditions from dominating.

    A number of alternative approaches are possible.

    Prior to the introduction of new operating systems, services and applications, a certificate of conformity as proof of compliance with the EU General Data Protection Regulation and national Data Protection Acts could be required. A permanent independent group of experts could be established to execute mandatory checks.

    Service providers could adopt a more preventive approach. The existing opt-out approach could be replaced with an opt-in model, whereby the transfer of personal data is explicitly authorised by the user and default settings initially prevent such a transfer.

    Service providers could clearly inform users what data is transmitted and guarantee that none will be without their explicit authorisation. They should also ensure that third parties cannot obtain this data.
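
A minimal sketch can make this opt-in, privacy-by-default idea concrete. The Python below is purely illustrative – the data categories and function names are assumptions, not taken from any regulation or product: every sharing flag starts switched off, and nothing is transmitted unless the user has explicitly authorised it.

```python
from dataclasses import dataclass, field

# Hypothetical data categories, for illustration only.
DATA_CATEGORIES = ("usage_statistics", "location", "contacts", "advertising_id")


@dataclass
class ConsentSettings:
    # Privacy by default: every category starts out NOT authorised (opt-in).
    authorised: dict = field(
        default_factory=lambda: {c: False for c in DATA_CATEGORIES}
    )

    def opt_in(self, category: str) -> None:
        """Record the user's explicit authorisation for one category."""
        if category not in self.authorised:
            raise ValueError(f"unknown data category: {category}")
        self.authorised[category] = True

    def opt_out(self, category: str) -> None:
        """Withdraw authorisation; withdrawing must be as easy as granting it."""
        if category in self.authorised:
            self.authorised[category] = False


def transmit(category: str, payload: dict, consent: ConsentSettings) -> bool:
    """Send data only if the user has explicitly opted in to this category."""
    if not consent.authorised.get(category, False):
        print(f"blocked: no consent for '{category}'")
        return False
    print(f"sending {category}: {payload}")  # stand-in for the real transfer
    return True


if __name__ == "__main__":
    consent = ConsentSettings()
    transmit("location", {"lat": 48.2, "lon": 16.4}, consent)  # blocked by default
    consent.opt_in("location")                                  # explicit opt-in
    transmit("location", {"lat": 48.2, "lon": 16.4}, consent)  # now allowed
```

Withdrawing consent is just as simple as granting it, and because no flag is ever switched on silently, an update cannot quietly change what is shared.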

    The European Commission’s recent proposal to introduce new legislation to guarantee privacy in electronic communications is a step in the right direction.

    But all parties and stakeholders should work hand in hand to protect consumers and companies and generate the trust that is needed to reap the benefits that the digital revolution can provide. Together let us stop the biggest lie on the Internet.

Read the CEPIS Statement “Critical technological dependency requires a revised privacy policy of major service providers”.

     

    Picture credits: InsideOut Project

     

  • Data Economy

    Why Privacy Shield is safe from legal challenges

by The Digital Post | 28/Nov/2016 | 5 min read

    The Commission is convinced that the Privacy Shield lives up to the requirements set out by the European Court of Justice, says Christian Wigand, EC spokesperson for Justice.

The Digital Post: Despite the reassuring statements of the European Commission, the new “Safe Harbour” does not seem out of danger. Is the Privacy Shield strong enough to withstand any future attempt to challenge its legal legitimacy?

Christian Wigand: As we have said from the beginning, the Commission is convinced that the Privacy Shield lives up to the requirements set out by the European Court of Justice, which have been the basis for the negotiations. We used the ECJ ruling as a “benchmark” in the final phase of the negotiations. Let me explain how three key requirements have been addressed:

    – The European Court of Justice required limitations for access to personal data for national security purposes and the availability of independent oversight and redress mechanisms.

The U.S. ruled out indiscriminate mass surveillance of the personal data transferred to the US under this arrangement and, for the first time, has given written commitments in this respect to the EU. For complaints about possible access by national intelligence authorities, a new Ombudsperson will be set up, independent from the intelligence services.

    – The Court required a regular review of the adequacy decisions.

There will be an annual joint review of the functioning of the arrangement, which will also cover the issue of national security access.

    – The Court required that all individual complaints about the way U.S. companies process their personal data are investigated and resolved.

There will be a number of ways to address complaints, starting with dispute resolution by the company and free-of-charge alternative dispute resolution. Citizens can also go to the data protection authorities, who will work together with the Federal Trade Commission to ensure that complaints by EU citizens are investigated and resolved. If a case is not resolved by any of the other means, as a last resort there will be an arbitration mechanism. Redress for EU citizens in the area of national security will be handled by an Ombudsperson independent from the US intelligence services.

     

TDP: Three months ago French Interior Minister Bernard Cazeneuve and his German counterpart, Thomas de Maizière, called on the EU to adopt a law that would require app companies to make encrypted messages available to law enforcement. What is the official position of the Commission on this particular issue? Is the Commission working on a proposal?

    CW: Encryption is widely recognised as an essential tool for security and trust in open networks. It can play a crucial role, together with other measures, to protect information, including personal data, hence reducing the impact of data breaches and security incidents. However, the use of encryption should not prevent competent authorities from safeguarding important public interests in accordance with the procedures, conditions and safeguards set forth by law.

    The current Data Protection Directive (which also applies to the so-called over-the-top service providers such as WhatsApp or Skype) allows Member States to restrict the scope of certain data protection rights where necessary and proportionate to, for instance, safeguard national security, and the prevention, investigation, detection and prosecution of criminal offences.
    The new General Data Protection Regulation (which will apply as from 25 May 2018) maintains these restrictions.

     

TDP: According to a survey published recently by Dell, most firms are unprepared for the EU’s General Data Protection Regulation less than 18 months before it enters into force. Are you worried about that?

CW: Making the new data protection rules work in practice is a priority for us, and we work closely with all stakeholders on that. The European Commission has set out a number of measures to make sure that companies operating in the European Union, as well as national regulators, will be ready for the new rules. Work is ongoing at all levels, with data protection authorities, industry representatives, data protection experts from Member States and, of course, national governments. For example, there are monthly meetings with Member State authorities on implementation. At the same time we are setting up a network between the Commission and national authorities to exchange information on the implementation of the Regulation and to share good practices.

     

    Picture credits: U.S. Army
  • Data Economy

    “Oh … but people don’t care about privacy”

by Nikolaos Laoutaris | 20/Oct/2016 | 3 min read

It won’t be long before we look back and laugh at the way we approached privacy in the happy days of the web.

     

    If only I had a penny for every time I’ve heard this aphorism!

    True, most typology studies out there as well as our own experiences verify that currently most of us act like the kids that rush to the table and grab the candy in the classic delayed gratification marshmallow experiment: convenience rules over our privacy concerns.

    But nothing is written in stone about this. Given enough information and some time to digest it, even greedy kids learn. Just take a look at some other things we didn’t use to care about.

     

    Airport security

I never had the pleasure of walking straight onto a plane without a security check, but from what I hear there was a time when that was how it worked. You would show up at the airport, ticket in hand. The check-in assistant would verify that your name was on the list and check your ID. Then you would just walk past the security officer and go directly to the boarding gate. Simple as that.

Then came hijackers and ruined everything. Between 1968 and 1972, hijackers took over a commercial aircraft every other week, on average. So long, speedy boarding, and farewell to smoking on planes 20 years later.

     

    Smoking


Since we are on the topic of smoking, and given that lots of privacy concerns are caused by personal data collection practices in online advertising, I cannot avoid thinking of Betty and Don Draper with cigarettes in hand at work, in the car, or even at home with the kids.

    To be honest I don’t have to go as far as the Mad Men heroes to draw examples. I am pretty, pretty, pretty sure I’ve seen some of this in real life.

     

    Dangerous toys

    Where do I start here? I could list some of my own but they are nowhere near as fun as some that I discovered with a quick search around the web. Things like:

    – Glass blowing kit

    – Lead casting kit

    – Working electric power tools for kids

– The kerosene train

    – Magic killer guns that impress, burn, or knock down your friends.

Power tools for junior

Pictures are louder than words. Just take a look at The 8 Most Wildly Irresponsible Vintage Toys. Last on this list is the “Atomic Energy Lab”, which brings us to:

     

Recreational uses of radioactive materials

I love micro-mechanics, and there’s nothing more lovable about it than mechanical watches. There is magic in listening to the ticking of a mechanical movement while watching the seconds hand sweep smoothly above the dial. You can even do it in the dark, because modern watches use Super-LumiNova to illuminate the dial markings and hands.

But it was not always like that. Before Super-LumiNova, watches used tritium, and before that … radium.

Swiss Military Watch Commander

Radium watch hands under ultraviolet light

I am stretching dangerously beyond my field here, but from what I gather, tritium, a radioactive material, needs to be handled very carefully. Radium is downright dangerous. I mean “you are going to die” dangerous. Just read a bit about what happened to the “Radium Girls”, who applied radium paint to watch dials on an assembly line in the 1920s.

Radium girls

But we are not done yet. Remember, the title of this section is “Recreational uses of radioactive materials”. Watch dials are just the tip of the iceberg. Being able to read the time in the dark is more of a useful thing than a recreational one (with some exceptions). Could society stomach the dangers for workers? Who knows? It doesn’t really matter, because there are these other uses, truly recreational ones (in the beginning at least), for which I hope the answer is pretty clear. Here goes the list:

    – Radium chocolate
    – Radium water
    – Radium toothpaste
    – Radium spa

Radium Schokolade

    Details and imagery at 9 Ways People Used Radium Before We Understood the Risks.

Anyhow, I could go on for hours about this and talk about car safety belts, car seat headrests, balconies, furniture design, etc., but I think what I am getting at is clear: societies evolve.

It takes some time and some pain, but they evolve. Especially in our time, with the ease with which information spreads, they evolve fast. Mark my words: it won’t be long before we look back and laugh at the way we approached privacy in the happy days of the web.


    Credits: JOHN LLOYD

     

  • Data Economy

    The EU data landscape: Striking the balance between regulation and innovation

by Elena Zvarici | 30/Sep/2016 | 4 min read

    In the context of the 7th annual EuroCloud Forum, which takes place from 5-6 October in Bucharest, Romania, Elena Zvarici, executive board member of EuroCloud Europe, talks about how Europe can take advantage of cloud computing and the data economy.

In order for Europe to take full advantage of cloud computing and the data economy, we need to strike the right balance between regulation and innovation.

In the digital world the balancing act between business and regulation is a delicate one. In the past year we have seen the adoption of the new European General Data Protection Regulation, the invalidation of the Safe Harbour agreement for transatlantic data transfers, and problematic discussions around its replacement, the Privacy Shield.

    Setting these developments into the context of the many ongoing initiatives at EU level aimed at encouraging innovation and the data economy, it is clear that getting the balance right is no easy task.

Europe is leading the way in data privacy and advocates a high level of data protection worldwide. The newly adopted General Data Protection Regulation introduces a new concept of responsibility towards data ownership, as well as new legal obligations with which businesses must comply. For cloud SMEs and start-ups, getting up to speed can be problematic, and they will need help.

A coordinated approach between data protection authorities, policy makers and industry is needed to help organisations in this transition, by providing adequate data breach reporting tools and compliance toolkits and by publicising the key issues. Let’s make sure that European SMEs and start-ups, so often the drivers of growth in Europe, are well placed to comply.

    While the GDPR provides a high level of data protection we must remember that we are ever more connected through digital means and cannot think solely in terms of Europe. We are global users and exporters of digital services and need to have a strong cloud computing and data economy to be competitive. International data flows will play a key part in this. To avoid regulation clashes and to create international data-driven markets, in the future we should strive towards the creation of uniform, accepted standards of personal data protection on a global basis.

The recent agreement on the Privacy Shield for EU-US data transfers did not come a moment too soon and will hopefully bring the much needed legal certainty for the approximately 4,000 businesses that made use of the Safe Harbour mechanism. This legal assurance is vital. Many of these companies rely on global information exchanges. Let’s hope that the provisions in the Privacy Shield can provide a robust enough framework to encourage data flows while providing high standards of data protection.

    Global data flows are vital to international trade and economic growth and the European Commission Initiative on the free flow of data, expected at the end of 2016, should aim to enable European companies, particularly in the growing cloud computing sector, to be in the forefront of the global innovation race.

    The Initiative should aim to reinforce the European cloud sector, so that companies are encouraged to develop new innovative services in the cloud, sell their services cross-border and enter the global market as exporters of technology.

    This can be done by providing clarity on issues such as data ownership, liability arising from data use and data localisation across Europe.

    If we really want to position Europe as a global leader in the data economy we need to ensure that we get the balance right. This means ensuring high levels of privacy while fostering new business innovation in sectors that rely on data and developing trust and confidence among users, from the individual consumer to the public and private sector.

    Now is the time to move forward and encourage Europe to reap the benefits of data and the cloud.

     

    Picture credits: Roberto Sartori
  • Data Economy

    Why we can’t afford another legal blow to EU-US data flows

by John Higgins | 31/May/2016 | 5 min read

    If Standard Contractual Clauses (SCCs) suffer the same fate as Safe Harbour then transferring data to the US will in practice become almost impossible, further threatening to balkanize the Internet and to undermine international trade.

     

    Eight months ago the Financial Times warned in an editorial that a ruling by the Court of Justice of the European Union (CJEU) to invalidate Safe Harbour, a commonly used legal mechanism for transferring data to the US, threatened to balkanize the Internet and undermine international trade.

That threat deepened sharply last week when Ireland’s top data protection authority, the Irish Data Protection Commission, announced it would refer another legal mechanism, Standard Contractual Clauses (SCCs), to the courts too.

    After Safe Harbour was invalidated companies that need to transfer data as part of their day-to-day activities scrambled to find other legal methods to allow them to continue. One such method is the Standard Contractual Clause.

    If SCCs suffer the same fate as Safe Harbour then transferring data to the US will in practice become almost impossible.

    But it’s not just transatlantic data flows that are being called into question. Companies use SCCs to transfer data all over the world.

    If Europe’s courts conclude that SCCs are no safer than Safe Harbour this could effectively cut Europe out of the emerging global data economy, and that would hurt companies from almost every corner of the economy – not just the tech sector.

    Global data flows are vital to international trade. Forcing companies to store their data within Europe will have serious implications for Europe’s economic prospects.

As the European Data Protection Supervisor, Giovanni Buttarelli, himself said last week, it is unreasonable to ask companies to reinvent their practices all the time.

    I would urge Europe’s data protection authorities to stop shifting the legal goal posts for international data transfers and to wait until Safe Harbour’s intended replacement, the Privacy Shield, has been given a chance to work.

    The Privacy Shield, with its Ombudsperson role, would address the key concerns about EU citizens’ potential exposure to unwarranted surveillance by US security agencies.

    Privacy activists have dismissed the Privacy Shield before it’s even been given a chance to work. Jumping to a negative conclusion when so much is at stake seems rather reckless.

    Right now we need more legal certainty, not less. Give Privacy Shield a chance. If necessary make fixes once it’s in place but don’t throw companies into a legal black hole by closing down all options for international data transfers.

     

    Picture credits: Devin Poolman
  • Data Economy

    Julie Brill: Why the EU-U.S. Privacy Shield will work

by The Digital Post | 19/Feb/2016 | 9 min read

The Digital Post speaks with FTC Commissioner Julie Brill about the new ‘Safe Harbour’, the implications of the EU privacy reform, and privacy issues arising from the boom of the Internet of Things.

     

The Digital Post: The European Union and the United States of America have reached an agreement on a new Safe Harbour data treaty. What are, in your view, the main achievements of the deal? What would have been the concrete risks had an agreement not been signed?

    Julie Brill: The main achievement of Privacy Shield is that it provides strong privacy protections for European consumers and creates a framework for more parties to engage in active supervision and stronger enforcement cooperation.  With respect to commercial data practices, Privacy Shield will provide stronger privacy protections than Safe Harbor did – through beefed up onward transfer requirements, and in other ways.

    Privacy Shield will also establish more active supervision of the program in practice, so that the Department of Commerce, the European Commission, European data protection authorities (DPAs), and the FTC can detect and address any issues that come up. Privacy Shield will also provide a well-defined process for consumers to complain about the data practices of Privacy Shield companies.

    The FTC will remain committed to giving priority to complaint referrals from DPAs, and there will be a better process in place for following up on these complaints.  And even in the absence of referrals from DPAs, the FTC will continue to aggressively look for violations of the Privacy Shield principles.

    Finally, in the area of national security, the United States agreed to take the unprecedented step of designating an ombudsperson to take complaints about surveillance activities that relate to Privacy Shield.  This is in addition to the significant reforms that Congress and President Obama have made to surveillance practices in the past few years.

    The risks if Privacy Shield hadn’t been agreed upon would have been that consumers and businesses would have continued in the limbo in which we currently exist, where some mechanisms to transfer personal data from the EU to the U.S. are still allowed, but they are expensive, opaque, and much more difficult for the FTC to enforce.

    Of course, Privacy Shield still has many steps to take before it receives approval.  If it were not approved, then companies – particularly small and medium enterprises – would lose out because of the time and resources that they have to put into alternative arrangements for data transfers.

    But consumers also would lose out because they would have far less transparency into which companies are handling their data, the rules governing data transfers, and where to go to complain if they believe their rights are not being respected.

     

    TDP: According to some observers, the new agreement won’t be sufficient to meet the concerns of the European Court of Justice. What is your opinion?

    JB:  It’s important to remember that the CJEU’s Schrems decision did not address national security surveillance practices in the United States. Rather, the case was based on the court’s concern that the European Commission’s adequacy decision in the year 2000 did not address U.S. privacy protections relating to national security surveillance.

    It is hard to say how the CJEU would have assessed a full, accurate record concerning surveillance practices and privacy protections in the United States, had those facts been before the court.  In any event, the U.S. has enacted significant reforms since the Schrems case was referred to the CJEU, and the U.S. is making further commitments through Privacy Shield.

    On the whole, I believe these protections meet the CJEU’s standard of “essential equivalence to the EU legal order”, but we will have to wait to see if Privacy Shield is challenged to know whether the CJEU agrees.

     

TDP: Is the GDPR going to widen the chasm between EU and US regulatory approaches to data protection? How is the FTC working on this issue?

    JB:  The GDPR incorporates several provisions that either appeared first in the United States or are by now very familiar to companies and enforcers in the U.S.  Examples include a focus on reasonable data security through a continuing process of risk assessment and mitigation, a general security breach notification requirement, heightened protections for children, privacy by design, and a recognition that deidentification can reduce privacy and security risks.

    There are some differences between the European and U.S. versions of these provisions, but overall they show how developments in the U.S. can influence the direction that Europe takes.

    On the other hand, some provisions of the GDPR move further away from the U.S. approach.  A prime example is the GDPR’s right to be forgotten article, which extends to all data controllers.  This expansion is a sharp contrast to the very targeted and specific provisions of U.S. law that help individuals keep some information about themselves obscure.

    Companies and regulators on both sides of the Atlantic need to start working out answers to the many questions that the GDPR raises.  That’s one reason that I think it’s so important for us to move beyond the issues surrounding mechanisms for data transfers that have dominated the discussion for the past several months.

    With the announcement of an agreement on Privacy Shield in the past several weeks, I hope we now can begin to discuss the GDPR and issues like big data and the Internet of Things in a more sustained and meaningful way.

     

TDP: The FTC has been focusing on privacy issues related to the booming sectors of the Internet of Things and Big Data. What are the risks? How should regulators deal with this very sensitive issue?

    JB:  There are important roles for enforcement, policy development, and business and consumer guidance in the Internet of Things and Big Data ecosystems.  On the policy and guidance front, the FTC has been taking a close look at the potential benefits and risks of the Internet of Things and big data.

    We have hosted public workshops, taken public comments, and written key  reports on the broad range of technical and economic concerns that arise from having many more connected devices, huge volumes of personal data, and rapidly improving analytics.

    We heard a lot about the exciting possibilities to solve problems in health care, transportation, the environment, education, and other areas; but we also learned about significant risks.  Security is a huge challenge with the Internet of Things.

    Not only are many devices being offered by companies that do not have long track records with data security, but these devices are also being used in ways that collect highly sensitive information and create physical risks to consumers.

    With respect to big data, we found that there is a potential for unfairness or discrimination to enter through biases in data collection and analysis.  Some of these issues could get companies into trouble under fair lending, credit reporting, or other laws.  Other issues arise in settings that these laws do not cover, but companies still need to be aware of them because they may be deceptive or unfair.

    Enforcement also plays an important role in the FTC’s approach.  We have already brought enforcement actions relating to privacy and security violations with IoT devices.  We have the authority to stop unfair or deceptive practices – whether or not they involve new technologies and business practices – and we will use it in appropriate cases.

    Picture Credits: g4ll4is
  • Data Economy

    Data protection, too many obligations

by The Digital Post | 25/Jan/2016 | 4 min read

The legislation agreed in mid-December by Parliament and Council negotiators marks a crucial step forward in doing away with a calamitous patchwork of national laws on data protection. However, it contains a number of inconsistencies that could negatively affect Europe’s digital ambitions.

It took nearly four years of bitter negotiations for the EU to strike an agreement on a sweeping overhaul of its data protection rules. But it was worth it. The legislation agreed in mid-December by Parliament and Council negotiators marks a crucial step forward in doing away with Europe’s calamitous patchwork of national laws on data protection.

The previous EU rules dated back to 1995, and their varying interpretations by Member States have contributed to significant regulatory uncertainty while hindering innovation in critical sectors of the economy.

    However, the new General Data Protection Regulation (GDPR) is far from perfect. It still presents multiple critical aspects. For instance, it fails to create a level playing field for telecom operators.

Following its introduction, the electronic communications sector will be forced to abide by a twofold set of rules, complying with both the new data protection legislation and the ePrivacy Directive.

    If Europe is serious about supporting growth and innovation in its digital markets, this asymmetry should be addressed as soon as possible. Otherwise it will place yet another burden on a sector which has been hit hard in recent years by a slow economic recovery while being under pressure to invest more in digital networks in order to meet the EU broadband targets.

    As many know, the on-going Internet evolution has been providing breeding grounds for several new telecom-like services (including OTT services) to grow.

The point is that, unlike traditional telecom providers, such services are not necessarily bound by the terms of the ePrivacy Directive, although the two are functionally equivalent.

    As a consequence, different rules applying to equivalent services inevitably create unfair competition between telecom operators as well as legal uncertainty and general confusion among consumers.

    In order for consumers to benefit from a consistent regulation, regardless of the service provider in question, a prompt revision of the ePrivacy Directive is thus required.

    But the negative implications of the new regulation on data protection could be larger, stretching far beyond the telecoms sector.

    DigitalEurope, the main association representing the digital technology industry in Europe, believes that the legislation fails to strike the proper balance between protecting citizens’ fundamental rights to privacy and the ability for businesses in Europe to become more competitive.

    The text agreed upon between the European Commission, European Parliament and the Council of Ministers contains a number of stringent obligations that could be very costly for IT businesses, undermining their ability to invest, innovate and create jobs.

    European businesses, traditionally less equipped to meet these obligations, could be hit hard. And, of course, this is in stark contrast with Europe’s ambitions to create a generation of home-grown global leaders in the tech sector.

Another matter of concern is the compromise reached on the so-called “one-stop shop”, according to which tech companies operating in different countries will deal with only one data protection authority, namely the one where their European headquarters is based.

As Member States managed to weaken this principle, as recently reported by Reuters, some observers believe that this will create more legal confusion and litigation (for instance, in determining which national authority is concerned). Again: the bill for companies could be very expensive.

    Following the political agreement reached in trilogue, the final text of the data protection regulation will be formally adopted by the European Parliament and Council in a few weeks. Maybe there is still room to fix its inconsistencies.

     

    Photo credit: Martin Fisch
  • Data Economy

    Cows, privacy, and tragedy of the commons on the web

by Nikolaos Laoutaris | 18/Jan/2016 | 8 min read

    Web firms may have an interest in pursuing the monetization of users’ data with some more moderation. If they don’t, privacy concerns as well as adoption of tracking and advertisement blocking tools could grow to a point where innovation will suffer.

    As part of a recent keynote during the inaugural workshop of the Data Transparency Lab (Nov 20, 2014, Barcelona) I hinted that a Tragedy of the Commons around privacy might be the greatest challenge and danger for the future sustainability of the web, and the business models that keep it going.

    With this post I would like to elaborate a bit more on what I meant and maybe explain why my slides are full of happy, innocent looking cows.

    What is the Tragedy of the Commons?

    According to Wikipedia:

    The tragedy of the commons is an economic theory by Garrett Hardin, which states that individuals acting independently and rationally according to each’s self-interest behave contrary to the best interests of the whole group by depleting some common resource. The term is taken from the title of an article written by Hardin in 1968, which is in turn based upon an essay by a Victorian economist on the effects of unregulated grazing on common land.

    In the classical Tragedy of the Commons, individual cattle farmers acting selfishly keep releasing more cows onto a common parcel of land despite knowing that a disproportionate number of cows will deplete the land of all grass and drive everyone out of business.

    All the farmers share this common knowledge, but do nothing to avoid the impending tragedy.

    Selfishness dictates that it is better for a farmer to reap the immediate benefit of having more cows, diverting the damage to others and/or pushing the consequences to the future.

    The utopian outcome for each farmer is that he can keep accumulating cows without having to face the tragedy because, miraculously, others will reduce the size of their herds, saving the field from becoming barren. Unfortunately, everyone thinks alike and thus, eventually the field is overgrazed to destruction.

    Are there cows on the Web?

    There are several.

Not only in .jpeg, .gif or .tiff, but also in other formats that, unlike the aforementioned compression standards, can lead to (non-grass-related) tragedies. In my talk I am hinting at the following direct analogy between the aforementioned cow-related abstraction and the mounting concerns about privacy and the web.

    Farmer: A company having a business model around the monetization of personal information of users. This includes online advertising, recommendation, e-commerce, data aggregation for market analysis, etc.

    Cow:  A technology for tracking users online without their explicit consent or knowledge. Tracking cookies, analytics code in websites and browsers, browser and IP fingerprinting, etc.

Grass: The trust that we as individuals have in the web, or more accurately, our hope and expectation that the web and its free services are doing “more good than bad”.

    The main point here is that if the aforementioned business models (farmers) and technologies (cows) eat away user trust (grass) faster than its replenishment rate (free services that make us happy), then at some point the trust will be damaged beyond repair and users … will just abandon the web.

As extreme as the last statement may sound, the reader needs to keep in mind that other immensely popular media have been dethroned in the past. Print newspapers are nowhere near where they used to be in, say, the ’30s.

Broadcast television is nowhere near its height in the ’60s (think of the moon landing, JFK’s assassination, etc.).

The signs of quickly decaying trust in the web are already here.

– More than 60% of web traffic was recently measured to be over encrypted HTTPS, and all reports agree that the trend is accelerating.

– AdBlock Plus is the #1 Firefox add-on on the Mozilla download page, with close to 20 million users. Other browser and mobile app marketplaces are heavily populated with anti-tracking add-ons and services.

    –  Mainstream press is increasingly covering the topic on front pages and prime time, sometimes revealing truly shocking news.

    –  Regulators on both sides of the Atlantic are mobilizing to address privacy related challenges.

    If ignored, the mounting concerns around online privacy and tracking on the web may lead to mass adoption of tracking and advertisement blocking tools. Removing advertising profits from the web probably means the end of free services that we currently take for granted.

The impact on innovation will be a second negative consequence. Lastly, let’s not forget that advertising and recommendations are desired by most users, provided that certain red lines are not crossed.

    What constitutes a red line may change from person to person but certain categories are safe candidates (health, sexual orientation, political beliefs).

    In a recent study we have shown that it is possible to detect Interest-based Behavioral Targeting (IBT) and have delved into specific categories to measure the amount of targeting that goes on.

    What can we do to avoid an online tragedy of the commons?

    “Sunlight is the best disinfectant”

The famous quote of American Supreme Court Justice Louis Brandeis may have found yet another application in dealing with the privacy challenges of the web.

    Despite the buzz around the topic, the average citizen is in the dark when it comes to issues relating to how his personal information is gathered and used online without his explicit authorization.

A few years ago we demonstrated that price discrimination seems to have already crept into e-commerce. This means that the price that one sees in one’s browser for a product or service may be different from the one observed at the same time by a user in a different location.

Even at the same location, the personal traits of a user, such as his browsing history, may affect the price obtained.

To permit users to test for themselves whether they are being subjected to price discrimination, we developed (the price) $heriff, a simple-to-use browser add-on that shows, in real time, how the price seen by a user compares with the prices seen by other users or by fixed measurement proxies around the world.
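
As a rough sketch of the underlying idea – not $heriff’s actual implementation – the Python below fetches the same product page directly and through a few measurement proxies and compares the prices that come back. The URL, proxy addresses and price-extraction pattern are placeholder assumptions.

```python
import re
import urllib.request

# Placeholders: a real tool would use actual measurement proxies and a
# page-specific way of extracting the displayed price.
PRODUCT_URL = "https://example.com/product/123"
VANTAGE_POINTS = {
    "direct": None,
    "proxy-us": "http://proxy-us.example.net:8080",
    "proxy-de": "http://proxy-de.example.net:8080",
}
PRICE_RE = re.compile(r'"price"\s*:\s*"?(\d+(?:\.\d+)?)')


def fetch_price(url, proxy):
    """Download the page (optionally via a proxy) and extract the price shown."""
    handlers = []
    if proxy:
        handlers.append(urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
    opener = urllib.request.build_opener(*handlers)
    try:
        html = opener.open(url, timeout=10).read().decode("utf-8", "replace")
    except OSError:
        return None  # vantage point unreachable
    match = PRICE_RE.search(html)
    return float(match.group(1)) if match else None


if __name__ == "__main__":
    prices = {name: fetch_price(PRODUCT_URL, proxy)
              for name, proxy in VANTAGE_POINTS.items()}
    print(prices)
    seen = [p for p in prices.values() if p is not None]
    if seen and max(seen) != min(seen):
        print("Different vantage points see different prices: possible price discrimination.")
    else:
        print("No price difference observed for this page.")
```

A browser add-on like $heriff does this continuously and in real time, but the signal is the same: different vantage points seeing different prices for the same page at the same moment.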

Researchers at Columbia University and Northeastern University have, in a similar spirit, developed tools and methodologies that permit end users to test whether the advertisements or recommendations they received have been specifically targeted at them, or whether they are just random or location-dependent.

    Tools like $heriff and X-ray improve the transparency around online personal data. This has multifold benefits for all involved parties:

    – End users can exercise choice and decide for themselves whether they want to use ad blocking software and when.

– Advertising and analytics companies can use the tools to self-regulate and prove that they abstain from practices that most users find offensive.

    – Regulators and policy makers can use the tools to obtain valuable data that point to the real problems and help in drafting the right type of regulation for a very challenging problem.

    Mooo, who needs more tragedy????

     

    Photo credit: b3d_
  • Data Economy

Bundes Cloud: Germany on the verge of discriminating against foreign suppliers of digital services

by ECIPE | 30/Nov/2015 | 5 min read

    Adam Smith said that the road to certainty passes through the valley of ambiguity – Germany’s stance on cross-border data flows is no exception.

Germany is increasingly accused of engaging in digital protectionism and of commandeering the rest of Europe into policies aimed at ‘information sovereignty’ and at countering the threat of the data-driven ‘Industrie 4.0’.

While the politically important German telecom and publishing sectors openly argue for a ‘data Schengen’ that would effectively push US competition out of Germany or Europe, the government has been more cautious, preferring to talk in ambiguous terms – not least because German exporters face such barriers overseas.


    The federal government recently adopted a set of guidelines aiming to increase the ‘flexibility and security’ of its government-run IT systems.

    Germany’s 200 or so different government agencies run 1,300 data services centres, causing functional overlap and economic inefficiencies.

    The new proposal, drafted by Germany’s interior ministry, advocates the consolidation of government-run IT systems and IT services centres.

So far, this is in good order. Efficiency and order sound like the good governance we have come to expect. However, the proposal is accompanied by a far-reaching move towards data localisation: for external cloud and software services purchased by Germany’s public authorities, the government’s new guidelines (Resolution 2015/5 of the federal government’s IT Council) stipulate that sensitive information (including government secrets and infrastructure information) has to be stored on servers within Germany.

    In addition, all suppliers of cloud and software services must guarantee that such information will not become subject to any disclosure obligation in foreign jurisdictions such as the United States.
     

    The last nail in the coffin

At first sight, such requirements may sound reasonable in the post-Snowden environment; the NSA was, after all, listening in on the Chancellor’s phone calls.

Also, a serious attack on the IT systems of the Bundestag caused parliamentarians to question government agencies’ cyber security competences. But such notions are built on the very common misconception that data security is a function of where the data is physically located.

On the contrary, centralising data in one country increases both the potential risk and the scale of the damage that hackers can cause.

This is why the native tech industry in Europe advocates against such localisation policies. Data is not more secure because its IP address is in Germany, as it is accessible from any location in any case. It is simply the old saying about putting all the eggs in one basket.

But what is aimed at just public institutions will inarguably spill over to the private markets as well. Government employees use the same type of business software to draft their documents as common folk; government payroll and planning run on enterprise applications used in private businesses. Excluding certain vendors from government purchases will affect the profitability of these firms, and whether they continue to be present on the German market at all. The new proposal is an effective message to major vendors of software, storage and processing services: either head for Germany or – bitte – leave.

It probably goes without saying, but the issue is not necessarily about imposing security requirements or imposing German law on federal data. The problem is how it is being done.

Firstly, many countries (including the United States) determine where government data can be placed on a case-by-case basis. Unlike what is aimed for by Germany, government data is usually heavily decentralised, which allows for proportionate measures taken by each authority in each case. Not all data held by public authorities is indispensable to Germany’s national security, which brings the danger of arbitrariness on the part of the government and discrimination against foreign suppliers. If the German government centralises its servers, and thereby extends the localisation requirement by bundling sensitive information with other data, it may find itself in violation of its WTO commitments.

Secondly, Germany imposes its law unilaterally on its data, i.e. independently of foreign jurisdictions and international law. Rather than tackling the issue directly with the culprit – the US government that unfairly exercises jurisdiction over its tech firms – the new German proposal is designed to make sure that German and US industry will be caught in the middle. You can only abide by one law, not both. Germany has only cemented the precedent for, say, the US, Chinese or Russian governments to claim jurisdiction over German tech firms on more arbitrary grounds.

To continue with old proverbs: Germany did not cast the first stone, but it most likely hammered the last nail in the coffin.

Rather than fighting fire with fire by prosecuting businesses, Germany should exercise its moral high ground to force other governments into a system built on mutual legal assistance – where governments are held accountable for their laws, not the firms that try to abide by them. But it seems as if the imperative of looking tough took priority over being effective.

     

    In the long term

    Many private firms increasingly or exclusively rely on cloud-based storage and data processing. According to a recent Eurostat survey, 19 per cent of European firms used cloud computing in 2014, primarily for email hosting and storage services.

    46 per cent of those companies used advanced cloud services including financial and accounting software applications, customer relationship management and other business applications.

    In general, a government-imposed limitation of vendor choices artificially restricts competition, incurs higher cost and prevents innovative business models from gaining ground and scale.

    Accordingly, data localisation destroys well-functioning digital business models, increases the risk of successful attacks due to data concentration, and undermines the international competitiveness of digital and traditional exporters – all of which is at the detriment of the German economy.

    This article was co-authored by Hosuk Lee-Makiyama and Matthias Bauer. Originally posted here

     

    photo credit: Erwin Brevis
  • Data Economy

    Safe Harbour ruling: A fierce storm with no lighthouse in sight

by Claudia La Donna | 26/Oct/2015 | 3 min read

    The new Safe Harbour ruling has shown the difficulties in adapting existing legal rules to the globalised, digital era. Online privacy legislation is clashing with modern business models, while European regulators are struggling to balance citizen rights with the desire to boost the competitiveness of the tech industry.

     

If someone were asked to guess which issue has generated the fiercest debate in Brussels in recent months, the European Court of Justice (ECJ) Safe Harbour ruling would be the answer. Brussels has not stopped talking about it since 6 October. The name Safe Harbour, which evokes a secure environment, no longer fits the legal uncertainty and insecurity that the ruling has generated.

    The ECJ ruled that the transatlantic Safe Harbour agreement, which allows American companies to use a single standard for consumer privacy and data transfer of private information between the EU and the US, is invalid.

With its ruling the ECJ has considerably challenged, if not disrupted, this framework put in place to ease transatlantic information sharing, deeming it inadequate, especially in light of the surveillance allegations and scandals involving US intelligence services (including the NSA).

    The upshot of the ruling is that there are now only limited pan-EU rules on data flow from Europe to the USA.

The ECJ has caused quite a stir in the tech world with its recent judgment. Tech companies, big and small, are scrambling to see what data they process and where it is transferred. Most multinationals are now legally obliged to suspend any transfer of their customers’ data to the USA and to move their data storage and operations to an EU subsidiary.

Has anyone quantified the economic implications of a real stop to data transfers between the EU and the USA? A power blackout is the best analogy I can think of.

The European Commission has therefore been put in a tough position. While it has to support the ruling by the European Court of Justice and guarantee citizens’ privacy, it has also drawn the ire of the ICT industry. Trade and business associations are lobbying for a pragmatic solution, namely a transition period that would legalise the current transatlantic data flows.

    The Commission has also promised guidelines for companies and data processors by early November and is working together with the national authorities to prevent fragmentation. But industry fears that this will not prevent headaches, stress and costs. A German data protection authority, for instance, has already warned it would fine non-compliant companies severely.

    Meanwhile, Europe and the USA have also been negotiating a renewed Safe Harbour agreement. The ruling comes in the midst of these talks and will be an extra source of pressure. However, little can be done to accommodate the ruling unless America agrees to suspend its surveillance mechanisms on EU citizen data, which would be a very big ask.

In summary, the new Safe Harbour ruling has shown the difficulties in adapting existing legal rules to the globalised, digital era. Online privacy legislation is clashing with modern business models, while European regulators try to balance citizen rights with the desire to boost their tech industry and remain competitive. It’s a fierce storm with no lighthouse in sight.

     

  • Data Economy

    Did the ECJ really kill the Safe Harbor?

by Ignasi Guardans | 09/Oct/2015 | 9 min read

    An in-depth look at the legal scenarios arising from the EU landmark ruling that declared invalid the EU-US Safe Harbor agreement on the transfer of personal data.

     

On October 6, 2015, the European Court of Justice (“ECJ”) ruled in the “Schrems” case that the U.S.-EU Safe Harbor framework on the transfer of personal data from Europe to the United States was invalid.

For the past 15 years, this Safe Harbor framework gave privileged status to U.S. companies, allowing such entities to “self-certify” that they complied with privacy standards that were negotiated between the European Commission and the United States Department of Commerce under the Clinton Administration in 1999 and viewed as “adequate” by the EU.

Effective immediately, today’s ruling may force all of the 4,400 U.S. entities that currently rely on the Safe Harbor to access the data of their EU partners and subsidiaries to seek alternate modes of data transfer or risk non-compliance with EU data protection requirements.

     

    THE FACTS
Austrian privacy campaigner Maximilian Schrems originally lodged his complaint with the Irish Data Protection Authority (“DPA”) against Facebook’s use of his data and the transfer of data occurring between Facebook’s Irish entity and its U.S. parent company.

    According to the complainant, and based on Edward Snowden’s revelations on mass surveillance, Facebook and other U.S. multinationals were, directly or indirectly, allowing U.S. national security agencies unrestricted access to EU citizens’ data.

    Such unrestricted access could be construed as being in violation of the fundamental rights granted under the EU Data Protection Directive 95/46 (the  “Data Directive”), currently under revision in the EU.

After the Irish DPA declined to investigate such concerns on the basis that the Safe Harbor implemented between the U.S. and Irish entities was exclusively overseen by the European Commission, the complaint was escalated to Europe’s highest court.

     

    THE DECISION
    The ECJ disagreed with the Irish DPA’s interpretation, by stating that the existing provision “does not prevent a supervisory authority of a Member State … from examining the claim of a person concerning the protection of his rights and freedoms in regard to the processing of personal data relating to him which has been transferred from a Member State to third country when that person contends that the law and practices in force in the third country do not ensure an adequate level of protection”.

    In essence, this means that each EU member state DPA has the authority to hear complaints about the level of protection for personal data that other countries offer, and potentially to second guess any determinations that the European Commission has made that those countries offer adequate protection.

    In addition, the Court noted that “legislation not providing for any possibility for an individual to pursue legal remedies in order to have access to personal data relating to him, or to obtain the rectification or erasure of such data, compromises the essence of the fundamental right to effective judicial protection, the existence of such a possibility being inherent in the existence of the rule of law”.

    Following the September 23 opinion of Yves Bot, the ECJ’s Advocate General for the case, which notably stated that “once personal data is transferred to the United States, the National Security Agency and other United States security agencies such as the Federal Bureau of Investigation are able to access it in the course of a mass and indiscriminate surveillance and interception of such data”, the Court invalidated the EU Commission decision 2000/520/EC of 26 July 2000 on the adequacy of the Safe Harbor framework to EU privacy standards.

     

    THE REACTIONS OF THE EU INSTITUTIONS
    The EC promptly reacted to the decision of the ECJ. In a press conference on the same day of the ruling, the First Vice-President of the EC, Frans Timmermans, and the Commissioner for Justice, Consumers and Gender Equality, Věra Jourová, explained how the EC is planning to tackle the issues raised by the Court.

In particular, they clarified that the Commission now has three priorities in light of the ECJ’s ruling: (i) guaranteeing that the data of EU citizens are protected when transferred across the Atlantic, (ii) ensuring that data flows continue, and (iii) ensuring a uniform response across the EU on alternative ways to transfer data.

    According to Commissioner Jourová, the data flow can continue under EU data protection rules which provide for other safeguard mechanisms for international transfers of personal data (e.g. standard data protection clauses in contracts between companies exchanging data across the Atlantic or corporate rules for transfers within a corporate group) and the derogations under which data can be transferred (i.e. performance of a contract, important public interest grounds, vital interest of the data subject, or consent of the individual).

    The EC is planning to provide clear guidance to national data protection authorities on how to deal with data transfer requests to the US, in light of the ruling, and will put relevant information and contact points on its website.

    The guidance should guarantee a uniform enforcement of the ruling and more legal certainty for citizens and businesses.

    The Chair of the European Parliament Civil Liberties Committee, Claude Moraes, has called for the immediate suspension of the Safe Harbor agreement, following the decision of the ECJ, and for its replacement by the Commission with a new framework for transfers of personal data to the US in compliance with EU law. The European Parliament had already advanced those requests more than once in the past.

     

    THE REACTION OF THE UNITED STATES DEPARTMENT OF COMMERCE
The Secretary of the U.S. Department of Commerce, Penny Pritzker, promptly issued a statement expressing deep disappointment with the decision. The statement indicates that the decision “creates significant uncertainty for both U.S. and EU companies and consumers, and puts at risk the thriving transatlantic digital economy.” It further calls for the release of an updated Safe Harbor Framework “as soon as possible.”

    Secretary Pritzker’s statement also indicates that the U.S. is prepared to work with the European Commission to address the uncertainty that this decision causes for U.S. and EU businesses so that businesses that “have complied in good faith with the Safe Harbor and provided robust protection of EU citizens’ privacy in accordance with the Framework’s principles can continue to grow the world’s digital economy.”

     

    IMMEDIATE IMPACTS AND LONG-TERM CONSEQUENCES
The ECJ decision will now be sent back to the High Court in Dublin, so that the national judge can use this new interpretative framework as a basis for deciding Schrems’ legal challenge seeking an audit of Facebook.

    While the ECJ decision is of immediate application, the practical effect in a B2C setting will actually depend on the actions of the DPAs in each European Union member state, and others.

    Meanwhile, public outrage may lead to a wave of complaints and possible requests for interim action, such as injunctions before national courts. Such initiatives may notably be undertaken by the likes of complainant and privacy activist Mr. Schrems, and others who follow his lead.

    Strictly speaking, only a decision from the European Commission has been invalidated — the Safe Harbor remains a voluntary mechanism adopted by the United States under the supervision of the U.S. Federal Trade Commission (“FTC”) or Department of Transportation (“DoT”).

    Accordingly, companies that have certified as compliant with the Safe Harbor are still subject to FTC or DoT jurisdiction, but compliance with the Safe Harbor Framework will no longer be assumed by European authorities to offer an adequate level of protection.

    The consequence of this ECJ decision lies in the fact that each national DPA now has the power to control the conformity of a data transfer not only to the Data Directive, but also to the Safe Harbor framework.

    Therefore, the compliance of the U.S. data importer with the Safe Harbor Framework may now be scrutinized by both the FTC and DoT (as before), and each local DPA.

    From a B2B point of view, this decision will, without doubt, disrupt the ongoing negotiations with European business customers, who might threaten to interrupt the delivery of goods or services and seek redress for noncompliance until their providers establish alternative grounds to transfer data to the United States in accordance with the requirements of the Data Directive.

     

    NEXT STEPS
While the Safe Harbor certification of each U.S. entity may now be scrutinized by each local EU DPA, from an EU law perspective alternate modes of data transfer may still be relied upon, such as Data Transfer Agreements based on the EU Commission Model Clauses (a fixed contractual template regulating the transfer of data from one or more EU data exporters to one or more non-EU data importers) or Binding Corporate Rules (“BCR”, an ad-hoc set of rules governing the processing of personal data within the various entities of a given group of companies).

    The BCR approach involves potential risks to both U.S. companies and European corporate affiliates, including the following:

    – If the Safe Harbor certification of a U.S. company is deemed invalid by a DPA, this European DPA may initiate sanctions against any EU exporter making data available to this U.S. data importer. If this U.S. data importer has no physical or commercial presence in EU territory, no sanction may be enforced against it by an EU DPA.

    – If, for the security of their data transfers from Europe, the U.S. importers execute Data Transfer Agreements with their EU counterparts, the joint-liability regime of the European Model Clauses will make the EU data exporter bear the whole of the actual liability.

    On the one hand, Model Clauses are easily executable, but do not provide much flexibility. In addition, their adoption involves legal risk due to their pass-through liability and audit requirements, and is not always feasible due to the need to execute clauses with any sub-processors that will have access to the personal data transferred.

    On the other hand, BCR are time consuming and potentially expensive to implement, but may offer a tailor-made solution for a given group of entities.

U.S. companies should carefully explore the risks and benefits that data transfers using the Model Clause and BCR approaches offer, and may also wish to re-examine business practices to avoid exposure to the legal risks that transfers of personal data outside of the EU involve.

    A re-examination and change in data transfer practices could help mitigate the risks that the Model Clause and BCR approaches have under EU law, as well as potential risks that agreeing to European-style data protection expectations might have if tested in litigation in U.S. courts.

    The draft Data Protection Regulation currently being discussed in the EU appears to maintain both the Model Clause and BCR mechanisms, which also offer the advantage of regulating data transfers worldwide and not solely to the United States.

    We may reasonably doubt that the ECJ’s intention was to sanction EU companies that transfer data outside of the EU under the Safe Harbor framework. Notwithstanding, this may be the final outcome of its decision.

There is little doubt that this decision will have a political impact, should the Obama administration choose to carry this issue forward within the transatlantic talks, notably those surrounding the adoption of the TTIP, once the draft Data Protection Regulation is adopted in the EU before the end of 2015.

     

    This article was originally published on K&L Gates Hub. Co-authors: Etienne Drouard, Samuel R. Castic and Claude-Étienne Armingaud

     

    photo credit: Simon Ingram
  • Data Economy

    Ever heard about Hadoop? You should

    Although the word Hadoop may not ring a bell to you, it is probably one of the main reasons why you have heard – and continuously hear – buzzwords such as “Big Data” and “Internet of Things”. On April 15-16, The Digital Post had the pleasure [read more]
    byVasia Kalavri | 04/May/20156 min read

    Although the word Hadoop may not ring a bell to you, it is probably one of the main reasons why you have heard – and continuously hear – buzzwords such as “Big Data” and “Internet of Things”.

    On April 15-16, The Digital Post had the pleasure to attend the Hadoop Summit 2015 Europe, which took place in Brussels, at the SQUARE meeting center. Here are our impressions from the event.

     

    What is Hadoop?

    Hadoop is an open-source platform for distributed storage and processing of very large amounts of data.

    You might have never heard of it, but Hadoop is probably one of the main reasons why you have heard – and continuously hear – buzzwords such as “Big Data” and “Internet of Things”.

In fact, Hadoop has become the de facto standard for big data processing.

Data volumes are increasing. Ninety percent of the world’s data was created over the last two years, according to research from IBM. Essentially, Hadoop accomplishes two tasks: massive data storage and faster processing.

On the one hand, it makes it possible to store files bigger than can fit on any single node or server. On the other hand, it provides the ability to process huge amounts of data, or at least a framework for processing that data.
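To make the split between storage and processing more concrete, here is a minimal, illustrative word-count sketch in the spirit of Hadoop Streaming, where the mapper and reducer are plain scripts reading from standard input; the file name, invocation and cluster setup are assumptions made for the example, not a description of any particular deployment.

```python
#!/usr/bin/env python3
# word_count.py - illustrative Hadoop Streaming-style mapper/reducer sketch.
# Hadoop splits the input across nodes, runs the mapper on each split,
# shuffles/sorts the emitted pairs by key, and feeds them to the reducer.
import sys

def mapper():
    # Emit "<word>\t1" for every word read from standard input.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by word, so counts for the same word are adjacent.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word == current:
            count += int(value)
        else:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    # Run as: word_count.py map   or   word_count.py reduce
    if len(sys.argv) > 1 and sys.argv[1] == "map":
        mapper()
    else:
        reducer()
```

Locally the same pipeline can be simulated with `cat input.txt | python3 word_count.py map | sort | python3 word_count.py reduce`; on a cluster, Hadoop distributes exactly these steps across many machines.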

Even though Hadoop is nearly 10 years old, it has only recently started to become popular in industry and it is still quite far from mainstream. However, interest in Hadoop-related technologies is continuously increasing, Hadoop-related talent is in demand, and highly skilled people in this area are highly valued and hard to find.

     

    The conference

    The Hadoop Summit is one of the biggest industrial conferences focusing on Apache Hadoop and similar big data technologies, organized by Hortonworks. Hadoop community members, users, programmers, industrial partners and researchers participate in this 2-day event to share their experiences and knowledge, seek Hadoop talent to hire, promote their products and do a lot of networking.

    The summit takes place both in North America and Europe. In fact, Hortonworks reported having about 15-20 percent of its business and employees in Europe.

    The event kicked off on Monday April 13 with several pre-conference events, such as trainings and affiliated meetups, followed by the main 2-day conference on April 15-16.

    Each main conference day started with quite lengthy keynotes, followed by talks in 6 parallel sessions, covering the following topics:

    – Committer track: technical presentations made by Apache Hadoop and related projects committers

    – Data science and Hadoop

    – Hadoop Governance, Security & Operations

    – Hadoop Access Engines

    – Applications of Hadoop and the Data Driven Business

    – The Future of Apache Hadoop

     

We analysed the talk titles and abstracts to find the most popular topics and, not surprisingly, these mostly included the words Hadoop, Data, Apache and Analytics:

[Figure: word cloud of the most frequent terms in the talk titles]
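For readers curious how such a ranking can be reproduced, a minimal sketch follows; it assumes the talk titles have been collected one per line in a file called titles.txt, and both the file name and the stop-word list are illustrative assumptions rather than the organisers’ actual tooling.

```python
# Count the most frequent words across conference talk titles.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "and", "of", "for", "in", "on", "with", "to", "your"}

with open("titles.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z]+", f.read().lower())

counts = Counter(w for w in words if w not in STOP_WORDS)
print(counts.most_common(10))  # e.g. hadoop, data, apache, analytics, ...
```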

According to the organizers, there were 351 submissions from 163 organizations. Yet the diversity and variety in the agenda was rather limited. Almost 1 out of 5 speakers came from Hortonworks (the organizers), while there were only 5 women among the speakers, an extremely disappointing share of around 5%. The attendee count was also impressive, 1,300 as reported by the organizers, but equally striking was the lack of women among them.

     


What got people talking, however, was the immense amount of data reported by some of the participating companies: among others, Yahoo reported 600 petabytes of data, a 43,000-server cluster and 1 million Hadoop jobs per day; Pinterest talked about 40 petabytes of data on Amazon S3 and a 2,000-node Hadoop cluster; and Spotify reported having 13 petabytes of data stored in Hadoop.

As anticipated, the keynotes were very much focused on proving the business value of Hadoop and on promoting products and services. Apart from some awkward role playing and the sales pitches, there was little technical value in most of them, with the Yahoo keynote being an important exception.

Streaming and real-time processing were certainly two of the hottest topics at the summit, covered by several talks each day. Real-time processing, i.e. processing data the moment it reaches your system and being able to make decisions immediately based on occurring events, is without question the next big thing, if not the current one. I hope, though, that speakers get a bit more creative with their use cases: out of the 5 streaming talks I attended, 4 used “anomaly detection” as their walk-through example and motivation.

The party venue and theme fit the male-dominated audience perfectly: an automotive museum, the famous Brussels Autoworld, certainly a spectacular place if you like cars, motorbikes and even more cars. Cars and motorbikes aside, we really enjoyed the food, even though vegetarian and vegan attendees probably felt neglected.

    Hadoop Summit was a successful event, judging by its numbers, technical content and overall organization. We look forward to seeing how the organizers will try to build on this success by improving, during the next events, speaker and attendee diversity and inclusivity.

     

    photo credits: Alex
  • Data Economy

    Why solving the PNR dilemma could help find a new balance between privacy and security

    In the overflow of ongoing debates about security and privacy in Europe, the discussion over an EU wide passenger name record system (PNR) is most telling of Europeans’ data-schizophrenia. But interestingly enough, going beyond the PNR controversy, coul [read more]
    byGuillaume Xavier-Bender | 20/Apr/20156 min read

    In the overflow of ongoing debates about security and privacy in Europe, the discussion over an EU wide passenger name record system (PNR) is most telling of Europeans’ data-schizophrenia. But interestingly enough, going beyond the PNR controversy, could also propel Europe into a new era of security, prosperity and privacy.

    It’s always a strange feeling, and it happens more often than not within the EU. From the moment you walk into the airport to the moment you exit your arrival terminal, it may happen that no one has asked you at a single time during your journey to show an ID.

You scanned your QR code to access the departure gates, you scanned it again to go through security and then to board, you flew and crossed one or multiple borders, you exited the plane, walked through baggage claim without stopping, and then customs, and you are in another country. No one ever asked you for an ID, and when flying, it is always a strange thought that the passenger sitting next to you might not be the person whose name is on the ticket.

Reality, though, is much different. Passenger information collected by airlines is accessible to law enforcement authorities, who can ‘pull’ the names of suspected terrorists or criminals. But suspects must already be on a list for the system to be watertight.

With rising concerns throughout Europe that terror attacks may be the act of EU citizens radicalized and trained abroad, many argue for a system that would allow drawing patterns and pre-empting the worst from happening.

It was in this spirit that an EU-U.S. PNR agreement was adopted as early as 2007, providing a framework for the transfer of EU air passengers’ personal data to the US authorities. With such a system, airlines ‘push’ passenger information to law enforcement authorities, who are then able to identify ‘unknown’ suspects.

    Put bluntly: after analysis of their data (travel dates, itinerary, baggage and payment information, etc.) passengers can become suspects; they get on the list.

    Understandably, for security purposes, such a tool can be very useful. It makes sense between allies, and the EU has signed such agreements with Canada and Australia too. It makes even more sense between EU countries; something that has yet to be achieved.

But understandably as well, such a tool raises a number of questions when it comes to the type of data collected, the authorities who will have access to it, the duration of retention of the data, and the risks such profiling could present to individual fundamental rights.

In the aftermath of the Paris terror attacks of January 2015, it made renewed sense in Europe to push for a single EU-wide PNR. While member states can perfectly well design their own systems – and some already have – such fragmentation defeats the purpose of ensuring the same high level of security and privacy to all EU citizens in all EU countries.

    In a resolution that put an end to a long standing stalemate between EU institutions, members of the European Parliament called in February for progress to be made by the Commission and member states on issues related to data retention and data protection, with a view to adopt an EU PNR legislation by the end of the year.

    Reform of data protection rules and adoption of a necessary and proportionate PNR system should therefore be done in parallel. Such conditionality is risky on many levels. But it could also give a much awaited boost to discussions in Europe on data retention and data privacy.

    In addition, finding the right balance in the EU PNR legislation could prove valuable for negotiations with partner countries. On April 1st, Mexico could have started imposing fines of up to $30,000 per flight to European airlines if it was not given sufficient guarantees that an EU-Mexico PNR agreement would soon see the light of day.

The EU agreed to do so, and the deadline was pushed back to July 1st. Mexico’s muscle-flexing is not surprising in the current global security context. Other countries, such as Japan and Korea, have asked for the same in the past, and could potentially invoke the Mexican precedent.

    As it now stands, the debate over PNR in Europe reaches far beyond the borders of the EU. It reaches to the core fundamentals of both individuals and nations: liberty and security. “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety”, cautioned Benjamin Franklin.

    Perhaps, but when it comes to personal information, the assumption of an inevitable trade-off between security and privacy should not be the norm.

The mere idea of such a trade-off makes it virtually impossible for policymakers or citizens to move forward constructively. The EU should on the contrary work towards increasing collective security by raising the standards of privacy protection. And the parallel discussions over PNR provide an ideal framework to do so before the end of the year.

    We still may not know who is really sitting next to us on the plane after that, but someone definitely will.

     

    photo credits: Jessica Keating Photography
  • Data Economy

    Safe Harbour and mass surveillance in the US: Two separate challenges

    The Safe Harbour agreement is not the appropriate instrument to solve transatlantic tensions over government mass surveillance in the US. The issue should be addressed separately from the US-EU commercial agreement regulating data transfer, whose suspensi [read more]
    byJohn Higgins | 15/Apr/20155 min read

    The Safe Harbour agreement is not the appropriate instrument to solve transatlantic tensions over government mass surveillance in the US. The issue should be addressed separately from the US-EU commercial agreement regulating data transfer, whose suspension would leave companies in the middle of a jurisdictional conflict they cannot themselves resolve.

    The EU-US Safe Harbour agreement has been the subject of a great deal of interest in recent weeks. At the end of March the head of the Article 29 Working Party, which represents Europe’s data protection authorities, raised the subject in the context of mass surveillance of private data by US security agencies in front of the European Parliament’s Civil Liberties (LIBE) Committee.

At the same time, Justice Commissioner Vera Jourova announced that she intends to conclude a revision of Safe Harbour with her US counterparts at the end of May. The debate is set to intensify this month as negotiators count down to the self-imposed deadline for revising the 14-year-old bilateral agreement.

    Amid all this attention it is worth pointing out a few things about Safe Harbour that have been overlooked in much of the media coverage of the subject, and to explain why it is so important to revise rather than suspend the mechanism.

    The EU-US Safe Harbour agreement facilitates transatlantic transfers of commercial data by European and US companies of all sizes. It is a vital tool for a wide range of industries engaged in the trade in goods and services between the EU and the US.

    The agreement needs to be refreshed and we support the efforts of the European Commission to improve it. We are confident that the reform of Safe Harbour can be achieved through political discussions between the two trading partners.

    While respecting citizens’ right to privacy, we believe an improved Safe Harbour agreement must continue to facilitate data transfers conducted by law-abiding companies.

    Any suspension of Safe Harbour would affect American and European companies alike, and it would be especially burdensome for small and medium size enterprises that use the mechanism for data transfers to the US.

    A suspension would clog up perfectly legitimate, non-controversial, safe flows of non-personal as well as personal data, and it would therefore have significant economic consequences for the US and the EU.

    Similarly, if national data protection authorities were empowered to override EU level agreements such as Safe Harbour, as suggested by some national data protection authorities last month during a hearing at the Court of Justice of the EU (CJEU), this would lead to the splintering of EU rules on international data transfers.

    This in turn would undermine efforts to create a digital single market, and instead create even more fragmentation and legal uncertainty within the EU than there is today. At the heart of the case being heard in court last month is the issue of protection of a citizen’s private data from US security agency surveillance.

    The tech industry in the US has joined forces with privacy groups in opposing efforts to extend bulk surveillance by US security agencies. In Europe we have been criticised by European security agencies for placing too high a priority on citizens’ privacy.

    DIGITALEUROPE shares the concerns of the public and opposes the bulk collection of citizen’s data by state security agencies. However, the Safe Harbour agreement is not the appropriate instrument to solve this problem. Isabelle Falque-Pierrotin, Chair of the Article 29 Working Party said as much at a meeting with the European Parliament‘s LIBE Committee at the end of March.

    Attempting to solve the problem through the revision of Safe Harbour would only deflect attention from the real discussions that need to occur.

The problem requires direct government-to-government negotiations on norms in cyber surveillance and on access by authorities. It cannot be resolved in a commercial agreement, which would leave companies in the middle of a jurisdictional conflict they cannot themselves resolve.

    We urge the European Commission, which leads the European negotiating team, to treat this task separately from the revision of rules to allow for the transfer of commercial data from Europe to the US. For more information please read our position paper on the Safe Harbour revision.

     

    photo credits: Linda Tanner
  • Data Economy

    Cognitive Computing: Benefits and Challenges of the Next Tech Revolution

    Cognitive Computing is going to transform and improve our lives. But it also presents challenges that we need to be conscious of in order to make the best use of this technology. Co-authored by: Mateo Valero and Jordi Torres Big Data technology allows [read more]
    byBarcelonaTech UPC | 13/Apr/20157 min read

    Cognitive Computing is going to transform and improve our lives. But it also presents challenges that we need to be conscious of in order to make the best use of this technology.

    Co-authored by: Mateo Valero and Jordi Torres

    Big Data technology allows companies to gain the edge over their business competitors and, in many ways, to increase customer benefits. For customers, the influences of big data are far reaching, but the technology is often so subtle that consumers have no idea that big data is actually helping make their lives easier.

    For instance, in the online shopping arena, Amazon’s recommendation engine uses big data and its database of around 250 million customers to suggest products by looking at previous purchases, what other people looking at similar things have purchased, and other variables.

They are also developing a new technology which predicts what items you might want based on the factors mentioned above and sends them to your nearest delivery hub, meaning faster deliveries for us.

To do so, they are using predictive models: a collection of mathematical and programming techniques that determine the probability of future events by analyzing historic and current data to create a model that predicts future outcomes.

    Today, predictive models form the basis of many of the things that we do online: search engines, computer translation, voice recognition systems, etc. Thanks to the advent of Big Data these models can be improved, or “trained”, by exposing them to large data sets that were previously unavailable.
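As a rough illustration of what “training” a predictive model on data looks like in practice, here is a minimal sketch using scikit-learn and synthetic data; the library choice, the toy features and the labels are assumptions made purely for the example, not a description of any particular company’s system.

```python
# Fit a simple predictive model: estimate the probability of a future event
# (e.g. "will this customer buy?") from historical feature data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                  # toy "historical" features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy outcome to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The trained model assigns a probability of the outcome to unseen cases.
print(model.predict_proba(X_test[:3]))
print("held-out accuracy:", model.score(X_test, y_test))
```

Exposing such a model to larger and richer datasets, as described above, is what gradually improves its predictions.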

    And it is for this reason that we are now at a turning point in the history of computing. Throughout its short history, computing has undergone a number of profound changes with different computing waves.

    In its first wave, computing made numbers computable.

    The second wave has made text and rich media computable and digitally accessible. Nowadays, we are experiencing the next wave that will also make context computable with systems that embed predictive capabilities, providing the right functionality and content at the right time, for the right application, by continuously learning about them and predicting what they will need.

For example, such systems identify and extract context features such as the hour, location, task, history or profile, in order to present an information set that is appropriate for a person at a specific time and place.

    The general idea is that instead of instructing a computer what to do, we are going to simply throw data at the problem and tell the computer to figure it out itself.

We changed the nature of the problem from one in which we tried to explain to the computer how to drive, to one in which we say, “Here’s a lot of data, figure out how to drive yourself”. For this purpose the software borrows functions from the brain, such as inference, prediction, correlation and abstraction, giving systems the ability to work these things out by themselves. Hence the use of the word “cognitive” to describe this new kind of computing.

    These reasoning capabilities, data complexity, and time to value expectations are driving the need for a new class of supercomputer systems such as those investigated in our research group in Barcelona.

Continuous development of supercomputing systems is required to enable the convergence of advanced analytic algorithms and big data technologies, driving new insights from the massive amounts of available data. We will use the term “Cognitive Computing” (others use Smart Computing, Intelligent Computing, etc.) to label this new type of computing research.

    We can find different examples of the strides made by cognitive computing in industry. The accuracy of Google’s voice recognition technology, for instance, improved from 84 percent in 2012 to 98 percent less than two years later. DeepFace technology from Facebook can now recognize faces with 97 percent accuracy.

    IBM was able to double the precision of Watson’s answers in the few years leading up to its famous victory in the quiz show Jeopardy. This is a very active scenario.

From 2011 through to May 2014, over 100 companies in the area merged or were acquired. During this same period, over $2 billion in venture capital was invested in companies building cognitive computing products and services.

Cognitive Computing will improve our lives. Healthcare organizations are using predictive modeling to assist in diagnosing patients and identifying risks associated with care, while farmers are using predictive modeling to manage and protect crops from planting through harvest.

But there are problems that we need to be conscious of. The first is the idea that we may be controlled by algorithms that are likely to predict what we are about to do.

Privacy was the central challenge of the second wave. In the next wave of Cognitive Computing, the challenge will be safeguarding free will. After the Snowden revelations, we realize how easy it is to abuse access to data.

    There is another problem. Cognitive Computing is going to challenge white collar, professional knowledge work in the 21st century in the same way that factory automation and the assembly line challenged blue collar labor in the 20th century.

For instance, one of Narrative Science’s co-founders estimates that 90 percent of news could be algorithmically generated by the mid-2020s, much of it without human intervention. Researchers at Oxford published a study estimating that 47 percent of total US employment is “at risk” due to the automation of cognitive tasks.

    Cognitive Computing is going to transform how we live, how we work and how we think, and that’s why Cognitive Computing will be a big deal. Cognitive computing is a powerful tool, but a tool nevertheless – and the humans wielding the tool must decide how to best use it.

     

    photo credits: Robert Course-Baker
  • Data Economy

    Towards a widespread adoption of telemedicine and mHealth

    Telemedicine and mHealth are an opportunity for citizens and a driver of great economic impact, although the market is not yet developed on a large scale. The European Commission has been funding several projects and is developing a number of initiatives [read more]
    byTerje Peetso | 13/Mar/201513 min read

    Telemedicine and mHealth are an opportunity for citizens and a driver of great economic impact, although the market is not yet developed on a large scale. The European Commission has been funding several projects and is developing a number of initiatives to foster a massive utilization of these tools by 2020.

Mobile technologies are spreading. According to the Gartner Hype Cycle 2014 [1], mHealth monitoring is currently in the trough of disillusionment and the plateau of productivity can be expected in 5-10 years.

Telemedicine was not specifically highlighted in the Cycle, but some aspects of it, such as wearable user interfaces and data science, were in different stages. At the same time, the Ericsson Mobility Report of November 2014 [2] predicts that 90% of people aged six years and over will have mobile phones by 2020, when the number of smartphone subscriptions is set to reach 6.1 billion.

This trend is reflected in mHealth use in Europe and around the world. Every day, different websites and journals publish scientific articles on the use of mobile devices for preventing, diagnosing, treating or monitoring a disease.

     

    State of play – Globally…

    According to a World Health Organisation 2011 survey on mHealth [3], there was already at the time at least one mHealth initiative in 100 of the 112 Member States surveyed; in three quarters four or more types of mHealth initiatives were reported.

    The types of mHealth initiatives most frequently reported globally were health call centres/health-care telephone help lines (59%), emergency toll-free telephone services (55%), emergencies (54%), and mobile telemedicine (49%).

    According to another WHO survey on telemedicine [4], teleradiology is currently the most developed telemedicine service area globally, with just over 60% of 114 responding countries offering some form of service, and over 30% of respondents having an established service.

Other more widely used services are teledermatology, telepathology, and telepsychiatry. Services were listed in 16 different healthcare areas: cardiology, mammography, surgery, ophthalmology, diabetes management, paediatrics, stroke treatment, urology, otorhinolaryngology, etc.

    … and in Europe

Technological solutions and research data already exist, patients are keen to use new, effective methods for the different steps in managing their health and disease, and doctors actively participate in research or introduce their own solutions for providing better service. More importantly, EU Member States have realized the potential of telemedicine and are supportive of its beneficial deployment.

    Some Member States have already adopted legislation on telemedicine or have started discussions on making it a regular healthcare service. However, today telemedicine and mobile health are still not part of mainstream healthcare in Europe.

The European Hospital Survey: Benchmarking deployment of eHealth services (2012–2013) [5] and the survey among General Practitioners (2013) [6] demonstrated that only 9% of hospitals offer patients the opportunity to be monitored remotely, fewer than 10% of general practitioners conduct online consultations with patients, and fewer than 16% consult online with other medical specialists.

From these surveys we also see that 39% of general practitioners are able to exchange patient medical data electronically with other health professionals and organizations.

    However, regular exchange occurs mainly for simple features, such as laboratory reports, referral and discharge letters, sick leave and disability certification.

When it comes to hospitals, 48% share some medical information electronically with external general practitioners and 70% of EU hospitals do so with external care providers. At the same time, less than 8% of EU hospitals share medical information electronically with healthcare providers located in other EU countries.

    A 2014 European eHealth Stakeholder Group report [7] assesses telemedicine services in the EU from the user’s and the stakeholder’s perspective and offers advice on how to make telemedicine services available to all Europeans by 2020 at the latest.

According to the report, tele-radiology has reached the status of routine use and is well ahead of other fields of telemedicine: it is currently used by 65% of the 368 radiology professionals who participated in the recent survey of the European Society of Radiology. Tele-neurology (telestroke) is in regular practice only in some regions of Europe, for example Catalonia and Scotland, while tele-dermatology is in use in Scotland but relies heavily on a limited number of enthusiasts and is not yet systematically organised at national level.

Only a few examples are available of the regular use of telemedicine for managing diabetes (Denmark, Finland) or chronic heart failure (Germany).

     

    Working towards legal clarity

In 2012, the Commission tried to assess the existing EU legal framework applying to telemedicine services in a dedicated Staff Working Document on Telemedicine [8], published along with the eHealth Action Plan 2012-2020.

The document covered legal questions such as licensing, data protection, reimbursement, and liability as regards telemedicine in cross-border healthcare in the EU.

Moreover, in April 2014, the European Commission launched a public consultation [9] to identify the right way forward to unlock the potential of mobile health in the EU. The consultation demonstrated that strong privacy and security tools (such as data encryption and authentication mechanisms) are needed to build users’ trust.

Half of the respondents called for strengthened enforcement of data protection and of the rules applicable to mHealth devices, and nearly half asked for more patient safety and transparency of information, by means of certification schemes or quality labelling of lifestyle and wellbeing apps.

The European Commission is currently analyzing options for addressing the issues highlighted by the public consultation. This is also in line with the Commission’s overall strategy of establishing the Digital Single Market.

     

    Addressing the need for evidence

    Many countries participating in the WHO survey on telemedicine [10] reported high costs as a barrier to the implementation of telemedicine solutions. One possible reason for this is that telemedicine has not yet proven its value in cost-effectiveness and better quality compared to traditional services.

The European Commission, through its Framework Programmes on Research and Innovation, has funded several projects [11] that address telemedicine, mobile health or both.

    Some of these projects are setting up more general principles for implementation, e.g. the project “Momentum” prepared a European telemedicine “Blueprint” to mainstream telemedicine into daily practice and make it sustainable [12] and the project “MovingLife” (“MObile eHealth for the VINdication of Global LIFEstyle change and disease management solutions”) has delivered a set of mHealth roadmaps, which should accelerate the establishment, acceptance and wide use of mHealth solutions globally [13].

    Other projects are looking into more specific areas of healthcare that could benefit from the use of telemedicine services both in terms of better health outcomes as well as being cost-effective.

One example is the CommonWell project [14], which delivered integrated telecare and telehealth services among social care providers and hospitals on open platforms.

The services developed were targeted mainly at patients suffering from chronic diseases and at professionals dealing with these conditions. The project ended in early 2012 and the integrated services are now in real-life operation at the four pilot sites established in Spain, Germany, England and the Netherlands.

    The project Renewing Health [15] implemented large scale real-life pilots to validate and evaluate innovative patient-centred personal health systems and telemedicine services for people suffering from Chronic Obstructive Pulmonary Diseases, diabetes and cardiovascular diseases.

    The United4Health project [16] aims to exploit and further deploy innovative telemedicine services implemented and trialled under the RenewingHealth project. UNWIRED Health [17] deals with mHealth procurement for the transformation of healthcare services.

    In this case, the Pre-Commercial Procurement (PCP) focuses on apps offering services to improve vaccination coverage and adherence and to coach patients with heart failures enabling education, motivation, remote monitoring and other functionalities, integrating and coordinating care provided by a hospital and the primary care physician.

The European Innovation Partnership on Active and Healthy Ageing (EIP AHA) [18] has formed a community across many EU regions with a critical mass of over 3,000 stakeholders with considerable expertise in innovative digital solutions (including mHealth and telemedicine) for citizens, care systems and industry. In six Action Groups they tackle health- and ageing-related challenges at European scale, involving approximately two million patients. The EIP AHA Reference Sites and other regions have already deployed a range of innovative practices which are available for sharing, transferring to other regions and scaling up.

     

    Global activities to learn from

    Small-scale projects that look into the effectiveness of both telemedicine and mobile health are carried out all over the world.

    For example, a study conducted by Irvine et al. on the use of mobile-web app to self-manage low back pain demonstrated that a theoretically based stand-alone mobile-Web intervention that tailors content to users’ preferences and interests can be an effective tool in self-management of low back pain [19].

Furthermore, new online communities have been set up for collecting and sharing information and for learning from each other. The first well-known site was PatientsLikeMe.com [20], whose motto is “Making healthcare better for everyone through sharing, support and research”.

The site now has more than 300,000 members, covers more than 300 conditions and has published more than 50 research studies based on the data available to it.

Let’s not forget another exciting area of mobile health: gamification. BrainGames from Anti-Aging Games [21] are based on 17,000 published scientific articles, on the basis of which brain stimulation games, relaxation games, best brain tips and games for stroke recovery were developed. Each of the games is linked to a PubMed study.

Some technological solutions may become beneficial rather quickly, for example in organizing the use of operating theatres more effectively, or in managing patient movement within a hospital so that it causes as little discomfort as possible.

    Global Lab for Health [22] is a good example of collecting, sharing as well as analysing information about innovative approaches in healthcare, also those that support organizational changes.

Any innovator can submit their innovative ideas on this Internet platform, also giving information about the existing users of the tool. A small group at the University of Southern California then contacts reference customers for their feedback, which also contains information about savings.

Based on all the gathered information, a report on the efficiency of the innovation is produced.

Conclusions

“75% of what he does today he never learned in residency”, says Dr Atul Gawande in his book “Complications”, talking about his father, also a doctor. In the same book, Dr Gawande also says: “To fail adopting new techniques would mean denying patients meaningful medical advances.”

    I believe the latter applies both to e.g. a new technology/ technique used in surgery and to home monitoring of patients with chronic heart failure.

The above-mentioned 2013 survey among general practitioners indicated that the main barriers to using eHealth are:

    (1) lack of remuneration for additional work answering patients’ emails (79%);

    (2) lack of sufficient IT training for healthcare professionals (75%);

    (3) lack of interoperability of IT systems (73%);

    (4) lack of sufficient IT skills on the side of healthcare professionals (72%);

(5) lack of a regulatory framework on confidentiality and privacy for email doctor-patient communication (71%).

None of these barriers is insurmountable, and all are addressed in the European Commission eHealth Action Plan 2012-2020 [23].

    Tele-monitoring of a patient does not mean a doctor should keep an eye on all data 24/7; nor does it mean that all face-to-face patient-doctor meetings will be replaced by the exchange of information using e-mails, sms and Skype sessions. Telemedicine and mHealth are tools that may help doctors react to the worsening of a condition (smart systems call attention only to changes that need intervention) in an early stage and thus help avoid complications and unnecessary hospitalization.

In addition, regular daily collection of data gives a much better overview of a condition than a single review of a patient’s condition during a face-to-face meeting once a month (or less). A patient’s active participation in managing their long-term condition supports patient empowerment. This is a fundamental principle of healthcare in the 21st century.

     

    [1] http://www.gartner.com/newsroom/id/2819918
    [2] http://www.ericsson.com/res/docs/2014/ericsson-mobility-report-november-2014.pdf
    [3] http://www.who.int/goe/publications/ehealth_series_vol3/en/
    [4] http://whqlibdoc.who.int/publications/2010/9789241564144_eng.pdf?ua=1
    [5] http://ec.europa.eu/digital-agenda/en/news/european-hospital-survey-benchmarking-deployment-ehealth-services-2012-2013
    [6] https://ec.europa.eu/digital-agenda/en/news/benchmarking-deployment-ehealth-among-general-practitioners-2013-smart-20110033
    [7] http://ec.europa.eu/digital-agenda/en/news/commission-publishes-four-reports-ehealth-stakeholder-group
    [8] https://ec.europa.eu/digital-agenda/en/eu-policy-ehealth
    [9] https://ec.europa.eu/digital-agenda/en/mhealth
    [10] http://whqlibdoc.who.int/publications/2010/9789241564144_eng.pdf?ua=1
    [11] https://ec.europa.eu/digital-agenda/en/news/ehealth-projects-research-and-innovation-field-ict-health-and-wellbeing-overview
    [12] www.telemedicine-momentum.eu
    [13] www.moving-life.eu
    [14] commonwell.eu
    [15] www.renewinghealth.eu
    [16] www.united4health.eu
    [17] http://www.unwiredhealth.eu
    [18] http://ec.europa.eu/research/innovation-union/index_en.cfm?section=active-healthy-ageing
    [19] Irvine AB, Russell H, Manocchia M, Mino DE, Cox Glassen T, Morgan R, Gau JM, Birney AJ, Ary DV Mobile-Web App to Self-Manage Low Back Pain: Randomized Controlled Trial. J Med Internet Res 2015;17(1):e1. DOI
    [20] http://www.patientslikeme.com
    [21] http://anti-aginggames.com/
    [22] http://uclainnovates.org/intake-form
    [23] https://ec.europa.eu/digital-agenda/en/eu-policy-ehealth

     

    photo credits: African Nutrition Matters
  • Data Economy

    The SEO benefits of green hosting

    The true weak spot of hosting companies is the lack of referrals from quality websites, which often translates into a low SEO-value.  For green Hosters however, there might be more good news on the horizon. The hosting market is very crowded, with many [read more]
    byRené Post | 23/Jan/20153 min read

    The true weak spot of hosting companies is the lack of referrals from quality websites, which often translates into a low SEO-value.  For green Hosters however, there might be more good news on the horizon.

    The hosting market is very crowded, with many companies fighting for attention from potential customers with something that is – let’s face it – more or less a generic product. This means companies have to turn to advertising, sponsoring, or to that most dangerous of strategies: comparison sites, where one bad day in your data-center can ruin your ranking for years to come.

Recently I stumbled upon the true weak spot of hosting companies: the lack of referrals from quality websites. You can have a million customers, but if no one mentions you on those million websites, in SEO terms you could still be losing to your neighbor who is hosting dinner parties.

In the hosting sector, this leads to a kind of black hole around hosters. Try, for instance, a Google search for ‘hosted by One.com‘, ‘websites hosted by Strato’, or for ‘Dreamhost.com‘. All three companies have over a million hosted domains, but hardly any actual customer recommendations for their hosting services.

What grabbed my attention was the fact that The Green Web Foundation’s website, http://www.thegreenwebfoundation.org, appears on the first or second page, while we are still a relatively small project with around 2,000 visitors per month. There are countless referrals from parked sites to large hosters, but since these referring sites have no content and no visitors, their SEO value is close to zero.

There is another problem: even if you are mentioned and linked to as a supplier from normally functioning sites, without some relevant content match between that site and the hosting business, the value of the backlink is probably limited. Relevance matters, so if you are a green host, a free listing in The Green Web Foundation’s online directory will carry real weight with the search engines, so it’s a good link to have.


While it is true that there is in general no content match between, for instance, Beverly Hills Church Pre-school and Dreamhost.com, this all changes when the pre-school includes a page on their website about their sustainability efforts. As their choice to host with Dreamhost.com was part of their sustainability policy, and their policy states that clearly, adding a Green Host Dreamhost badge backlinked to Dreamhost’s sustainability section will give them both extra points with the search engines.

    In SEO-terms, green hosting and sustainability are on-topic backlinks, highlighting the sustainability endeavors of both companies, and making it much easier for the hosting company to stand out – without paying for it by Adwords, sponsorship or paid links.

    photo credits: jacsonquerubin
  • Data Economy

    Making Europe fit for the ‘Big Data’ economy

    The European Commission has taken an important first step in outlining possible elements of an EU action plan on Big Data. It is now essential to get the policy framework right. The faster the better. A second wave of digital transformation is coming. [read more]
    byJohn Higgins | 12/Jan/20155 min read

    The European Commission has taken an important first step in outlining possible elements of an EU action plan on Big Data. It is now essential to get the policy framework right. The faster the better.

    A second wave of digital transformation is coming.

    The first one revolutionized the way we order information and spans technological advances from the advent of the mainframe computer to the arrival of Internet search.

This second wave will reinvent how we make things and solve problems.

    Broadly it can be summed up in two words: Big Data. The expression ‘Big Data’ is used to describe the ability to collect very large quantities of data from a growing number of devices connected through the Internet.

Thanks to vast storage capacity and easy access to supercomputing power – both often provided in the cloud – and to rapid progress in analytical capabilities, massive datasets can be stored, combined and analysed. In the next five years Big Data will help make breakthroughs in medical research in the fight against terminal illnesses. Per capita energy consumption will decline sharply thanks to smart metering, another application of Big Data.

    Traffic jams will be rarer, managing extreme weather conditions will become more science, less guesswork. Makers of consumer goods of all kinds will be able to reduce waste by tailoring production to actual demand. This new ‘data economy’ will be fertile ground that will allow many new European SMEs to flourish.

    Broad adoption of such Big Data applications can only happen if the data is allowed to flow freely, and if it can be gathered, shared and analysed by trusted sources. Size definitely does matter. The bigger the dataset, the more insights we can glean from it, so it’s important that the data can flow as widely as possible.

Some elements of Big Data might involve personal data.

    People need to be confident these are protected by laws and agreements (such as safe harbour). All actors in the data economy must work hard to ensure that data is as secure as possible against theft and fraud.

    The European Commission has taken an important first step in outlining possible elements of an EU action plan for advancing towards the data-driven economy and addressing Europe’s future societal challenges.

    To complement this initiative DIGITALEUROPE has drafted a paper outlining what we see as the policy focus in relation to Big Data.

    We have identified eight priorities:

    – Adopt a harmonised, risk-based and modern EU framework for personal data protection that creates trust while at the same time enabling societally beneficial innovations in the data economy

    – Encourage the protection of Big Data applications from cyber attacks, focusing regulatory efforts on truly critical infrastructures

    – Support the development of global, voluntary, market-driven and technology-neutral standards to ensure interoperability of datasets

    – Clarify the application of EU copyright rules so as to facilitate text and data mining

    – Boost the deployment of Open Data by transposing the Public Sector Information Directive into national law by June 2015 at the latest (EU Member States)

    – Create trust in cross-border data flows by supporting the implementation of the Trusted Cloud Europe recommendations

    – Continue addressing the data skills gap by supporting initiatives like the Grand Coalition for Digital Jobs

    – Continue encouraging private investment in broadband infrastructure and HPC technologies with public funding

    DIGITALEUROPE is ready to engage constructively with the European Commission, Parliament and Council to help them formulate a European action plan for the data economy.

    It is essential to get this policy framework right, but it is also important to move fast. While Europe is preparing the ground for widespread adoption of the new digital age, the rest of the world is not standing still.

     

    photo credit: data.path Ryoji.Ikeda
  • Data Economy

    ‘Green’ data centers: easier & cheaper than you think

    Data centers are formidable energy suckers, accounting for a large share of Internet energy consumption. Worse, they rely heavily on fossil power. Yet slashing their carbon footprint is far from inconceivable. Cloud computing technologies may help. In th [read more]
    by René Post | 17/Dec/2014 | 5 min read

    Data centers are formidable energy suckers, accounting for a large share of Internet energy consumption. Worse, they rely heavily on fossil power. Yet slashing their carbon footprint is far from inconceivable. Cloud computing technologies may help.

    In this age of ultra-fast fiber, we still need to generate the electrons that keep the photons flowing at the speed of light across the globe. Estimates suggest that around 10% of all electricity generated worldwide is now consumed within the digital domain, with a new iPhone – counting the network and data centers behind it – consuming more energy than the average refrigerator.

    Data centers account for around 3%, so their location is closely tied to the availability of cheap electrical power – whether natural or subsidized.

    The energy generation landscape in Europe, however, is changing quickly. Fossil-fuel-powered electricity plants in countries like Denmark and Germany are being phased out, and the effects are not limited to the countries that embrace the 'Energiewende' but ripple across borders, disrupting existing energy markets in 'slower' countries like the Netherlands and Belgium.

    But although a new era is clearly looming, Newton's third law still applies: every action provokes a reaction. A coal power plant in the Netherlands that gets pushed out of business by German wind power does not simply close down, but draws up new plans to attract the aluminium smelters of this age: data centers.

    And why not – isn't that how we modernize the economy? On paper, the marriage between old-style power plants and data centers looks ideal: coal and nuclear power plants need baseload clients, and here they are.

    Compare that with the perceived unpredictability of solar and wind power, and it seems inevitable that (non-hydro-producing) countries that want to play a role in this digital age are stuck with fossil power generation for a long time to come…

    But is that really so? The underlying assumption I want to challenge here is that data centers must be treated like old-style factories that need a stable energy supply – a requirement that is not actually dictated by the technology.

    Let us return briefly to the electrons and photons: the economics of transporting electricity over long distances may be terrible, but the economics of transporting data look far better.

    Facebook operates a 27,000 m² data center near Luleå in northern Sweden – where there are more moose than people, several thousand kilometers away from the large European cities where the users are. But that kind of distance hardly matters on the Internet: Berlin, Paris and London are just a few milliseconds away.

    By the same reasoning, large solar-powered plants in North Africa could just as easily provide the European (and African, and Middle Eastern) Internet with all the photons it needs.

    Or what about the North Sea on a windy winter night – clusters of wind turbines with data centers built directly into them? No power lines to shore, only a few fibers?

    This would very quickly lead to an Internet powered mainly by renewables.

    The only prerequisite is that these data centers can share their data, so that in principle each of them can serve the same content; whichever site has the most abundant electricity at a given moment takes over the main load.

    Oh, and that principle of sharing data across multiple locations? It might sound high-tech, but it was actually invented a while ago. It is called 'The Cloud'.
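
    As a rough sketch of this 'follow the renewables' idea – my illustration, not the author's design – the routing logic could be as simple as sending each request to whichever replicated data center currently reports the largest renewable surplus. All names and numbers below are invented.

    ```python
    # Minimal sketch: route traffic to the replica with the most renewable power available.
    # Assumes every data center holds the same data (i.e. 'the cloud') and periodically
    # reports how much renewable capacity it currently has to spare. Values are invented.
    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        latency_ms: float            # network distance from the user, in milliseconds
        renewable_surplus_mw: float  # spare wind/solar power right now

    def pick_replica(replicas, max_latency_ms=50.0):
        """Prefer the greenest replica among those close enough to serve the request."""
        candidates = [dc for dc in replicas if dc.latency_ms <= max_latency_ms]
        if not candidates:           # fall back to the nearest replica if none qualifies
            return min(replicas, key=lambda dc: dc.latency_ms)
        return max(candidates, key=lambda dc: dc.renewable_surplus_mw)

    replicas = [
        DataCenter("lulea",     latency_ms=30, renewable_surplus_mw=12.0),  # hydro/wind
        DataCenter("north-sea", latency_ms=15, renewable_surplus_mw=45.0),  # windy night
        DataCenter("sahara",    latency_ms=40, renewable_surplus_mw=0.0),   # night, no sun
    ]

    print(pick_replica(replicas).name)  # -> "north-sea"
    ```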

  • Data Economy

    Transatlantic relations: A matter of trust

    How big is the divide between the United States and Europe when we talk about data protection and cybersecurity? And what is at the basis of the current differences between the two regional players? Is it just Snowden and the NSA, or is it a deeper issue? [read more]
    by Ignasi Guardans | 16/Dec/2014 | 5 min read

    How big is the divide between the United States and Europe when we talk about data protection and cybersecurity? And what is at the basis of the current differences between the two regional players? Is it just Snowden and the NSA, or is it a deeper issue?

    I had the privilege of contributing the European perspective to a large group of experts attending an interesting exchange on this in Washington DC. What was supposed to be a conference on cybersecurity policy and regulation became an exchange on privacy and data.

    It is difficult for the EU and the US to work together in the field of online and data security as long as we have those other open quarrels on privacy protection and the rules applicable to transatlantic transfer of data.

    I was a bit surprised to realize that the EU's whole data protection regulatory approach is observed with a mix of admiration and respect by US experts. And – interestingly enough – it is our model that is on the way to becoming a world standard among democracies.

    Both systems are clearly different: in Europe we have a structured piece of legislation on privacy (currently under full revision), with clear definitions of privacy-related data, clear rules about the rights and obligations related to the use of those data, and clear authorities responsible for enforcing those measures.

    In the US, the legal protection of online privacy results from a diversity of legal instruments, enforced by different agencies and authorities. This is combined with the importance given to self-regulation by companies.

    But is the demand for online privacy by citizens that different in Europe and the US? No, as a matter of fact it isn’t. Americans do want their privacy protected as well.

    What is radically different between 'us' and 'them' can be reduced to one word: trust.

    Europeans do not trust their governments' management of data and will not give them a blank check.

    The Stasi, Ceaușescu, and many other personal experiences of authoritarian invasion of private life have taken their toll on the public perception of the risks of abuse of personal data. The US public does not have that memory. It is not a perceived risk – definitely not in mainstream public opinion.

    And what about private companies? How much of Europeans' mistrust of Facebook or Google is directed at those two companies and their privacy policies, and how much at their potential collaboration with the US Government? Difficult to say.

    But what appears to be clear is that a huge number of Europeans ask their public authorities, including the European legislators and the European Commission, to assume an active role in protecting them from this potential external intrusion.

    An intrusion which, in the public perception, comes from the United States: in part from its Government, in part from its huge corporations which control the largest part of our online digital life.

    If this is true, then the current transatlantic difficulties regarding online privacy require a social approach: they must deal with this citizens' mistrust, and are not just a matter of technical negotiation between experts or bureaucrats on both sides of the Pond.

    The matter will only be solved if this public trust is reinforced. US companies have a lot to lose if the transatlantic flow of private data is halted – if the 'Safe Harbour' scheme, which currently regulates the cases in which personal data originating in Europe can be transferred to and stored in the US, is interrupted.

    But we know well that this scheme is not working, and the threat of annulment is real (and it may be the Court of Justice that annuls it in the first place). The Commission – under huge pressure from the Parliament – is negotiating this with the US.

    But this is not just a legalistic issue to be solved like a trade negotiation over battery standards. This is a problem with deep social roots. The sooner American decision makers – in Congress and in Government – understand this, the easier it will be to rebuild the indispensable trust on the part of Europeans.

    And only with that trust in place will the US and the EU be able to work together in search of common answers to the essential common threats to our online and digital security.

  • Data Economy

    IoT: The great opportunity for Telcos

    By 2020, we are expected to have 50 billion connected devices. Will European Telecoms firms monetize the explosive growth of Internet of Things? The next five years will be critical. In the long term, much may depend on the development of 5G technology. [read more]
    by Ajit Jaokar | 15/Dec/2014 | 4 min read

    By 2020, we are expected to have 50 billion connected devices. Will European Telecoms firms monetize the explosive growth of Internet of Things? The next five years will be critical. In the long term, much may depend on the development of 5G technology.

    Like many memes which originate in the web domain (for example Web 2.0), Big Data has an impact on the Telecoms industry. However, unlike Web 2.0 (which is mostly based on the advertising business model), Big Data has wider implications for many domains (for example healthcare, transportation etc).

    The term Big Data is now (2014) quite mature, but its full impact across many verticals will only be felt over the next few years. And while Telecoms is a vertical in its own right, it is also an enabler of value for many other industries. Hence, there are many areas where Telecoms will interplay with Big Data.

    Based on my teaching at Oxford University and the City Sciences program at UPM (Universidad Politécnica de Madrid, the Technical University of Madrid), I propose that the value of Big Data for Telecoms lies in IoT, the Internet of Things.

    IoT is huge, but how huge?

    By 2020, we are expected to have 50 billion connected devices.

    To put that in context: the first commercial citywide cellular network was launched in Japan by NTT in 1979. The milestone of 1 billion mobile phone connections was reached in 2002, 2 billion in 2005, 3 billion in 2007, and 4 billion in February 2009.

    So, 50 billion by 2020 is a massive number, and no one doubts that number any more. But IoT is really all about Data and that makes it very interesting for the Telcos. Data is important, but increasingly it is also freely available.

    Customers are willing to share data. Cities are adopting Open Data initiatives. Big Data itself is based on the increasing availability of Data. IoT is expected to add a huge amount of data too.

    But, who will benefit from it and how?

    There is a phrase variously attributed to the oil magnate J. Paul Getty: 'The meek shall inherit the earth, but not its mining rights.' In other words, Data will be free, available and open, but someone will make money out of it. No doubt the web players and various start-ups will all monetize this data. But how will Telecoms?

    Looking at the business case for Big Data and IoT, the next five years are critical for Telecoms.

    Here's why. IoT connectivity will come in two forms: local area connectivity and wide area connectivity. Bluetooth 4.0 and iBeacon will provide the local area connectivity, and from 2015 onwards we can expect most devices and retailers to support Bluetooth 4.0.

    But wide area connectivity will still need 5G deployment, which is the most logical candidate for wide area IoT connectivity. And therein lies the value and business case of Big Data for Telecoms: 5G will be needed to connect the 'IoT islands' over the coming years.
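
    As a purely hypothetical sketch of that two-tier split (none of these classes come from a real library), sensors could push readings over a local link to a nearby gateway, which batches them onto the scarcer, costlier wide-area uplink:

    ```python
    # Hypothetical sketch of the local/wide-area split: sensors talk to a gateway over a
    # local link (think Bluetooth 4.0 / iBeacon), and only the gateway uses the wide-area
    # network (cellular today, 5G tomorrow). All classes below are invented for illustration.

    class LocalLink:
        """Stands in for Bluetooth-style local connectivity between sensor and gateway."""
        def send(self, reading: dict) -> None:
            print(f"[local] forwarding to gateway: {reading}")

    class WideAreaLink:
        """Stands in for the cellular (eventually 5G) uplink used by the gateway."""
        def send(self, batch: list) -> None:
            print(f"[wide-area] uploading batch of {len(batch)} readings")

    class Gateway:
        """Aggregates local readings and uploads them in batches over the wide area."""
        def __init__(self, uplink: WideAreaLink, batch_size: int = 10):
            self.uplink, self.batch_size, self.buffer = uplink, batch_size, []

        def receive(self, reading: dict) -> None:
            self.buffer.append(reading)
            if len(self.buffer) >= self.batch_size:
                self.uplink.send(self.buffer)
                self.buffer = []

    gateway = Gateway(WideAreaLink(), batch_size=3)
    local = LocalLink()
    for i in range(3):
        reading = {"sensor": "thermostat-1", "value": 20 + i}
        local.send(reading)       # local hop: sensor -> gateway
        gateway.receive(reading)  # gateway batches and uploads when the batch is full
    ```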

    Will Telecoms monetize IoT?

    Time will tell – specifically over the next five years, since most analysts predict that 5G deployments will take place in 2020 and beyond.

  • Data Economy

    Cognitive Computing: Benefits and challenges of the next tech revolution

    Co-authored with Mateo Valero Cognitive Computing is going to transform and improve our lives. But it also presents challenges that we need to be conscious of in order to make the best use of this technology. Big Data technology allows companies to gai [read more]
    by Jordi Torres | 13/Apr/2014 | 5 min read

    Co-authored with Mateo Valero

    Cognitive Computing is going to transform and improve our lives. But it also presents challenges that we need to be conscious of in order to make the best use of this technology.

    Big Data technology allows companies to gain the edge over their business competitors and, in many ways, to increase customer benefits. For customers, the influences of big data are far reaching, but the technology is often so subtle that consumers have no idea that big data is actually helping make their lives easier.

    For instance, in the online shopping arena, Amazon’s recommendation engine uses big data and its database of around 250 million customers to suggest products by looking at previous purchases, what other people looking at similar things have purchased, and other variables.

    They are also developing a new technology which predicts what items you might want, based on the factors mentioned above, and ships them to your nearest delivery hub – meaning faster deliveries for us.

    To do so, they use predictive models: collections of mathematical and programming techniques that analyze historical and current data to estimate the probability of future outcomes.

    Today, predictive models form the basis of many of the things that we do online: search engines, computer translation, voice recognition systems, etc. Thanks to the advent of Big Data these models can be improved, or “trained”, by exposing them to large data sets that were previously unavailable.
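
    As a minimal, purely illustrative sketch of what 'training' such a model means – a toy example, not Amazon's actual system, with invented features and data – consider a tiny classifier fitted to some purchase history with scikit-learn:

    ```python
    # Toy predictive model: estimate the probability that a customer buys a product,
    # given simple features from their history. Data and features are invented; a real
    # recommendation engine is vastly larger, but the principle is the same.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Historical data: [times_viewed, similar_items_bought, days_since_last_purchase]
    X = np.array([
        [5, 2, 1],
        [0, 0, 30],
        [3, 1, 7],
        [8, 4, 2],
        [1, 0, 60],
        [6, 3, 3],
    ])
    y = np.array([1, 0, 1, 1, 0, 1])  # 1 = bought, 0 = did not buy

    model = LogisticRegression().fit(X, y)  # "training" = fitting the model to history

    new_customer = np.array([[4, 2, 5]])
    print(model.predict_proba(new_customer)[0, 1])  # estimated probability of a purchase
    ```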

    And it is for this reason that we are now at a turning point in the history of computing. Throughout its short history, computing has undergone a number of profound changes with different computing waves.

    In its first wave, computing made numbers computable.

    The second wave has made text and rich media computable and digitally accessible. Nowadays, we are experiencing the next wave, which will also make context computable, with systems that embed predictive capabilities – providing the right functionality and content at the right time, for the right application, by continuously learning about users and predicting what they will need.

    For example, such systems identify and extract context features – such as the hour, location, task, history or profile – to present the information that is appropriate for a person at a specific time and place.
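
    As a small, hypothetical illustration of such context features, here is a sketch that extracts a handful of them and uses them to pick what to show; the rules are hand-written for clarity, whereas a real cognitive system would learn them from data.

    ```python
    # Hypothetical context-aware selection: extract simple context features and use them
    # to choose what information to present. In practice these rules would be learned
    # from data rather than hand-coded.
    from datetime import datetime

    def extract_context(now: datetime, location: str, calendar: list) -> dict:
        return {
            "hour": now.hour,
            "is_workday": now.weekday() < 5,
            "location": location,
            # True if any calendar entry starts within the next/previous 30 minutes
            "next_meeting_soon": any(abs((m - now).total_seconds()) < 1800 for m in calendar),
        }

    def choose_content(ctx: dict) -> str:
        if ctx["next_meeting_soon"]:
            return "meeting agenda and directions"
        if ctx["is_workday"] and ctx["hour"] < 10 and ctx["location"] == "home":
            return "commute traffic and today's schedule"
        return "news digest"

    ctx = extract_context(datetime(2014, 4, 14, 8, 30), "home", calendar=[])
    print(choose_content(ctx))  # -> "commute traffic and today's schedule"
    ```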

    The general idea is that instead of instructing a computer what to do, we are going to simply throw data at the problem and tell the computer to figure it out itself.

    We changed the nature of the problem from one in which we tried to explain to the computer how to drive, to one in which we say, "Here's a lot of data, figure out how to drive yourself". For this purpose, the software borrows functions from the brain – inference, prediction, correlation, abstraction – giving systems the ability to do this by themselves. Hence the use of the word 'cognitive' to describe this new kind of computing.

    These reasoning capabilities, data complexity, and time-to-value expectations are driving the need for a new class of supercomputer systems, such as those investigated in our research group in Barcelona.

    Continuous development of supercomputing systems is required, enabling the convergence of advanced analytic algorithms and big data technologies and driving new insights from the massive amounts of available data. We will use the term "Cognitive Computing" (others use Smart Computing, Intelligent Computing, etc.) to label this new type of computing research.

    We can find different examples of the strides made by cognitive computing in industry. The accuracy of Google’s voice recognition technology, for instance, improved from 84 percent in 2012 to 98 percent less than two years later. DeepFace technology from Facebook can now recognize faces with 97 percent accuracy.

    IBM was able to double the precision of Watson’s answers in the few years leading up to its famous victory in the quiz show Jeopardy. This is a very active scenario.

    From 2011 through to May 2014, over 100 companies in the area merged or were acquired. During this same period, over $2 billion in venture capital was invested in companies building cognitive computing products and services.

    Cognitive Computing will improve our lives. Healthcare organizations are using predictive modeling to assist in diagnosing patients and identifying risks associated with care. Farmers are using predictive modeling to manage and protect crops from planting through harvest.

    But there are problems that we need to be conscious of. The first is the idea that we may be controlled by algorithms that can predict what we are about to do.

    Privacy was the central challenge of the second-wave era. In the next wave of Cognitive Computing, the challenge will be safeguarding free will. After the Snowden revelations, we realize how easy it is to abuse access to data.

    There is another problem. Cognitive Computing is going to challenge white collar, professional knowledge work in the 21st century in the same way that factory automation and the assembly line challenged blue collar labor in the 20th century.

    For instance, one of Narrative Science's co-founders estimates that 90 percent of news could be algorithmically generated by the mid-2020s, much of it without human intervention. And researchers at Oxford have published a study estimating that 47 percent of total US employment is "at risk" due to the automation of cognitive tasks.

    Cognitive Computing is going to transform how we live, how we work and how we think, and that’s why Cognitive Computing will be a big deal. Cognitive computing is a powerful tool, but a tool nevertheless – and the humans wielding the tool must decide how to best use it.

     

    photo credits: Robert Course-Baker
