• Future of the Internet

    Time to Make Content Neutrality into Law

    by Žiga Turk | 05/Oct/2017 | 5 min read

    Nowadays, the main issue is not monopolies, nor pricing levels. The issue is a free and open space for innovation and the exchange of ideas. A law on internet content neutrality would ensure it.

     

    The previous battle in the war for a free and open internet was about net neutrality: equal access for all to the plumbing level of the internet. The next battle is about content neutrality: equal access for all to the content level of the internet. Content neutrality is more important than net neutrality. It is not about what speed is available to which service but about which voices are heard and which are suppressed. It should be made into law.

     

    Net Neutrality
    I was one of those ministers in charge of the information society who pushed hard to enshrine net neutrality in Slovenian law and in EU directives. We had some success. While net neutrality hardliners would not be entirely satisfied, provisions were adopted that require internet service providers and telecommunications companies, which deal with the lower (plumbing) levels of the internet, to treat all traffic equally: not, for example, to give a faster lane to Netflix and a slower one to YouTube, or a faster one to CNBC.com and a slower one to CNN.com. A policy has been put in place to make sure that competition among service providers remains open and fair.

     

    Content Neutrality
    I define content neutrality as a policy under which an internet service provider treats the content of all its users equally. User content is whatever a user hosts on or publishes to a service: videos, writings, tweets, domain name address books …

    A non-net-neutral internet would discriminate between the speeds of access to different services, for example Facebook and YouTube. A non-content-neutral internet would discriminate between different YouTube videos, different Facebook posts, different hosted blogs, different apps in the App Store, different services running on its cloud, different names in the domain name service … If such discrimination is based not on technical attributes such as size or processing intensity but on the meaning of the content, it constitutes a breach of content neutrality.
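
    To make the distinction concrete, here is a minimal sketch of such a rule; the attributes, thresholds and function names are my own illustrative assumptions, not any provider's actual system:

    ```python
    # A toy neutrality check: a host may discriminate on technical attributes
    # (size, processing cost) but never on the meaning of the content.
    # All names and thresholds here are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Item:
        size_mb: float   # technical attribute
        cpu_cost: float  # technical attribute
        meaning: str     # the message carried; must not affect hosting

    MAX_SIZE_MB = 500.0  # hypothetical limits, applied identically to everyone
    MAX_CPU_COST = 10.0

    def neutral_decision(item: Item) -> bool:
        """Content-neutral: reads only technical attributes, never `meaning`."""
        return item.size_mb <= MAX_SIZE_MB and item.cpu_cost <= MAX_CPU_COST

    def non_neutral_decision(item: Item, disliked_topics: set) -> bool:
        """A breach of content neutrality: the outcome depends on the meaning."""
        return neutral_decision(item) and item.meaning not in disliked_topics

    post = Item(size_mb=12.0, cpu_cost=0.5, meaning="unpopular opinion")
    print(neutral_decision(post))  # True: judged on technical grounds alone
    ```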

     

     

    Content neutrality ensures an open and fair competition of ideas.

     

    Real-world examples
    A real-world analogy to net neutrality would be a highway authority offering the trucks of one company priority lanes over those of another. Or a post office delivering packages sent by Amazon faster than packages sent by a small independent merchant. Or an electricity company delivering electricity to household A but not to household B.

    A real-world analogy to content neutrality would be a highway authority that inspects the cargo on the trucks and lets milk through, because it is good and healthy, while trucks with soda have to turn around, because some believe sugary drinks are bad for people. Or a post office that delivers promotional material for candidate A but refuses to deliver material for candidate B. In fact, the Spanish post office recently did something like that with mail related to the Catalan referendum on independence. Or an electricity company that delivers electricity to everyone except those who use it to electrocute animals, because the company's CEO is a vegan.

     

    Host’s dilemma
    The real danger is not that some services, content, foods or activities are prohibited; the real danger is that the companies providing the infrastructure (the hosting of content or services) arbitrarily decide what they will and will not host. Much like a post office deciding it will not carry mail if it does not like what is written in the letter. Content neutrality means that all mail and all email is delivered regardless of its content. Net neutrality means that an email from Gmail travels as fast as an email from Yahoo Mail. This example should make clear how much more important content neutrality is than net neutrality.

    While some infrastructure providers have already started the practice of not treating all content equally (notorious examples include de-platforming alt-right content on YouTube and rejecting the Gab app from iTunes and Google Play), I do not believe that infrastructure providers have much interest in policing the internet for inappropriate content. After all, it is not highway authorities that try to catch drug traffickers on the highways. It is the police.

    Policing content is an added effort and nuisance for the Googles, Facebooks and GoDaddys of the world. It opens them up to all kinds of pressure and litigation. It is not their business, it is not their expertise, and they should not have that authority. Currently they are caving in to pressure from interest groups and politicians who would like to have some content suppressed without the effort of going to court.

    Some companies are implementing voluntary codes of conduct. I do not believe this is a solution: serious offences and illegal content should not be left to voluntary measures, and legal content should not be subjected to any measures at all.

     

     

    Arbitrary suppression of content means the end of the competition of ideas, the end of democracy, not to mention the end of an open and free internet.

     

    Policy Recommendations
    Politicians should relieve infrastructure providers and hosting services of the obligation to police their platforms and of responsibility for the content someone else has put there. And more: infrastructure providers should be required to carry any legal content, regardless of its perceived meaning.

    Voluntary codes of conduct should be about conduct, not about content. Shouting from the audience in the middle of a theater performance can be and is prohibited, and people who do that are thrown out regardless of what they shout! But the company providing the electricity should not decide whether the play Hamilton deserves its electricity. Law enforcement and the courts should police cyberspace, not voluntary militias like the Anti-Defamation League, nor the algorithms of the infrastructure providers, nor Wild-West vigilantes.

    Some argue that internet companies should be regulated as utilities, and some of what has been suggested above would indeed be solved if they were treated as such. But that would be too broad. The digital world is different from the world of the utilities of the 20th century. The issue is not monopolies, nor pricing levels. The issue is a free and open space for innovation and the exchange of ideas. A law on internet content neutrality would ensure it.

     

    Picture credits: TheNewOldStock

     

     

  • Media

    How the ‘fake news’ crackdown could end up with almighty social networks

    by Žiga Turk | 12/Sep/2017 | 6 min read

    The soon-to-be-appointed EU expert group on fake news should seriously look into the overlooked danger that entrusting social networks with policing hate speech and fake news (for instance through voluntary codes) might actually give them a disproportionate power to shape public opinion.

    For me personally, the most enjoyable moment in the whole “fake news” commotion has been the progressives' re-discovery of the concept called truth. Finally, the pudding of post-modernist relativism was served up for eating. And it did not taste good.

    However, fake news and related phenomena, such as echo chambers and social bots, are a matter of concern for the entire political spectrum. Politicians and media feel challenged or even threatened by it. Some are even suggesting that in order to save democracy we need to regulate social media just like the printed press.

    The issue boils down to the balance between the right to free speech and the danger of false information. There is a growing tendency to make the danger look bigger and the issue of freedom of speech smaller in order to tilt that balance and thereby justify more governmental control of social media at the expense of free speech.

    The advocates of tighter regulation of social media base their argument on a number of wrong and unproven assumptions.

    The first wrong assumption concerns the gravity of the problem. It is simply not true that “The functioning of democracies is at stake. Fake news is as dangerous as hate speech and other illegal content.”

    It is not as dangerous as hate speech and it is not illegal. The functioning of democracy is not at stake just because two elections produced “wrong” decisions. Good arguments have been made that fake news did not have a serious impact on either the US elections or Brexit. And even if it did: politics has always played dirty. Information war, lies, deception and false promises are fair game.

    The second wrong assumption is that possession of the truth is possible. Most stories in the mainstream media are supposed to be fact-checked, and yet this does not prevent bias or falsehoods. What would a fact-check have made of a story claiming, in 2003, that Iraq had no WMD? It would have been labelled fake news and suppressed.

    The belief that “the lack of trusted bearings undermines the very structure of society” shows a deep contempt for and distrust of citizens, as if they were unable to form an opinion without an authority. In the past this authority was the Church, then the state, and in the future it will be the “fact-checkers”.

    How wrong! Truth is not established by an authority. We approach truth through a confrontation of ideas and arguments. This should be preserved without limitation.

    The third wrong assumption is that those in possession of the truth can be impartial. The war of ideas will simply move from debating ideas on the Web to meddling with the “fact-checking” authorities. Who nominates them? Politicians? I am sure they would be happy to.

    Or will they be “experts”? The “reporting” of hate speech is, as we speak, left to organized soldiers on the internet and to bots. The fight is increasingly not about ideas but about how to get Twitter or Facebook to close, silence or demote accounts that spread the “wrong” arguments.

    The fourth wrong assumption concerns the attitude towards free speech. Advocates of regulating social media claim that “freedom of speech is not limitless. It is enjoyed only within some sort of framing, such as ‘enhancing the access to and the diversity and quality of the channels and the content of communication’.” This is wrong. Freedom of speech is limited by other freedoms, not by nice-to-haves such as diversity and quality!

    They say that “it would be rather naïve to guarantee totally unrestricted freedom of speech to those whose long-term aim is to destroy democracy and its freedoms altogether.” By that standard, the whole idea of freedom of speech is naïve. If it is not hate speech, if it is not a credible call to commit a crime, and if it cannot be privately prosecuted as libel, it has to be free.

     

    The real problem

    In the effort to exaggerate the problem on the one hand, and to water down the issue of free speech on the other, we are missing a bigger issue: the danger that the authority to control thought and speech is outsourced to industry. There is also an emerging danger that the “big-social” (Facebook, Twitter, Google, Snap …) will abuse its power to shape public opinion and to form, in bed with big government, a controlled cyberspace environment.

    To make the “big-social” fight fake news, they would have to be treated as newspapers. And if they are newspapers, they can legitimately lean to one or the other political side, as most newspapers do. This would then allow Facebook or Twitter to actively promote certain political parties. If they are forced down that road, imagine how much worse the echo-chamber problem would get once the other side organizes its own social network. We would have, for example, the left on Twitter and the right on Gab!

    I am convinced it is important that the big-social offer a neutral and impartial platform for the exchange of ideas. If anything, this is what should be regulated: in the direction of content neutrality, transparency of algorithms, and transparency of decisions about whose accounts are to be disabled or otherwise punished for bad behavior. The internet promised to be an open space for the exchange of ideas. Let’s not ruin that! Let the big-social offer communication platforms, and let’s not drag them into policing what people think!

    All that legislators should demand is that the platforms remain available for the free and open exchange of ideas. Not a “voluntary code of conduct”, and not for the big-social to “have their own guidelines to clarify to users what constitutes illegal hate speech”.

    What constitutes illegal hate speech should be defined by law and enforced by courts. Censorship should not be outsourced to social media companies. If we go down that road we may end up with an alliance of big government and big-social creating a controlled and biased cyberspace that would dwarf the worst Orwellian nightmares.

     

    Freedom of fake news

    Freedom of speech includes the freedom of fake news. Existing laws on hate speech, libel and copyright infringement should be used against the authors, not against the big-social. Measures are needed to strengthen individual responsibility, not to make the big-social police the internet. A real-name policy should be promoted by labelling content that carries a real name and thus a responsible author. This is also a cure for the future threat of AI and bots interfering in places where humans socialize. Verified accounts are a good step in this direction.
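
    A minimal sketch of that labelling idea follows; the field names and the verified-author registry are hypothetical, purely for illustration:

    ```python
    # A toy labelling pass: posts are tagged according to whether the author
    # is a verified, real-name account, so readers can weigh what they read.
    # The registry and field names below are hypothetical assumptions.

    VERIFIED_AUTHORS = {"jane.doe", "jan.novak"}  # hypothetical real-name registry

    def label(post: dict) -> dict:
        """Attach a responsibility label without touching the content itself."""
        verified = post["author"] in VERIFIED_AUTHORS
        post["label"] = "real-name author" if verified else "unverified author"
        return post

    feed = [
        {"author": "jane.doe", "text": "A signed opinion."},
        {"author": "anon42", "text": "An anonymous claim."},
    ]
    for post in map(label, feed):
        print(post["label"], "->", post["text"])
    ```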

    The diseases of politics are fake politicians, fake policies, fake statistics and fake promises. Fake news is just a symptom. We should be treating the disease. And the best way to distinguish the bad and fake from the good and real is through a clash of ideas. The future of our civilization depends on preserving the internet as an open space for a free exchange of ideas. Any kind of ideas.

     

    Picture Credits: ciocci

     

     

  • Media

    What Facebook can learn from video games and press ethics

    by Per Strömbäck | 25/May/2016 | 7 min read

    Facebook's newsroom should take a page from the accumulated experience of hundreds of years of press ethics, and from a couple of decades of video games. Its first move should be to be transparent about its news algorithm and its priorities.

    The tech community loves to make up laws to describe certain phenomena, such as Moore’s law, which predicts growth in computing power, and the perhaps more humorous Godwin’s law, which says that any online discussion, if it runs long enough, will end up with someone comparing someone else to Hitler.

    But in order to understand the digital world, probably the most important of these laws would be Metcalfe’s law.

    It says that the value of a network increases with the square of the number of members (or nodes), which by extension means that the price of staying outside the network increases with every new member.
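
    As a back-of-the-envelope illustration of both halves of that claim (a deliberately simplified sketch: it counts each possible pairwise link as one unit of value, which is an illustrative assumption, not a measurement):

    ```python
    # Metcalfe's law as a toy calculation: network value grows roughly with
    # the square of the number of members, so the links an outsider forgoes
    # grow with every new member who joins.

    def network_value(n: int) -> int:
        """Possible pairwise links among n members: n*(n-1)/2, i.e. ~n^2."""
        return n * (n - 1) // 2

    for n in (10, 100, 1_000, 10_000):
        missed = network_value(n + 1) - network_value(n)  # links forgone by staying out
        print(f"{n:>6} members: value {network_value(n):>12,}; "
              f"an outsider now forgoes {missed:>6,} links")
    ```

    The second number grows linearly with every new member, which is exactly the rising price of staying outside.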

    This can be good news: for auctions or advert listings, it is convenient to have everything in one place. The downside, of course, is that it spawns very powerful dominant niche players (cue Vestager vs Google).

    No business knows better how to game Metcalfe’s law than Facebook. With some 1.6 billion users, the point where anyone could realistically opt out was passed long ago.

    Far from the naïve place of birthday greetings and flirty pokes it may once have been, Facebook today is more like an operating system for interaction far beyond social life: marketers use it to build hype and businesses to interact with customers, but dictators also use it to spread propaganda and terrorist organisations to distribute beheading videos.

    It cannot be easy to be Mark Zuckerberg: one day your service is believed to bring democracy to the Middle East through its sheer existence, the next you have to travel to Germany to make apologies for Nazi hate speech.

    If you’re a global service, you face the problem of different rules in different jurisdictions. So far, Silicon Valley has successfully played the “safe harbour” card, saying they can’t control what users post. (If all else fails, play the algorithm card – as in “we don’t really know what it does”!).

    This is not really saying “we take no responsibility” but rather a way of making their own rules. Convenient for business; the problem is that other people may disagree. And the deeper you get involved in a culture, the more difficult it becomes to surf above the clouds.

    These trends come together as Facebook’s power over the news becomes more evident.

    Depending on what Facebook decides to show in its users’ feeds, it wields a lot of influence over the digital public sphere. The current debate about Facebook’s alleged anti-conservative bias hints at a much bigger issue.

    When we ask “how can we know if the gravitation toward anti-Trump stories is a result of user preference or algorithm settings?”, we’re really asking questions such as: What rules and principles apply to Facebook’s news feed algorithm? Who is the editor in charge? Does Facebook subscribe to normal press ethics such as verifying stories with more than one source and hearing both sides of an issue?

    These are basic things, taught at every journalism school and developed over decades, even centuries, of free press. Systems of self-regulatory ethics bodies continuously evaluate and evolve these learnings, tweaking which publishing decisions are criticised and which are not.

    The details of the formal systems may vary from country to country, but the principles are the same and around them is a living conversation in the professional journalist community about when to publish a story and when not to, balancing the interests of privacy (for example of crime victims) and the public’s right to information.

    It is tempting to conclude that internet users should simply be better advised, not share hoax stories and be sceptical of sources, but that is the easy way out.

    If journalists with years of education and the ethics of the professional community to draw from find these decisions difficult enough to deserve seminars, ethics committees, even specialist magazines and radio shows, how could we ever expect the average social media user to take such a responsibility?

    The answer will always be that the organisation that delivers the news is responsible for the content. Mass distribution with no editorial responsibility is a recipe for disaster.

    In 2012 in Gothenburg, Sweden, teenagers’ use of social media for sexual bullying and hate speech spiralled out of control and led to beatings and even street fights in what became known as the “Instagram riots”.

    When The Pirate Bay posted autopsy photographs from a court case involving two children who had been murdered with a hammer, much to the horror of the Swedish public and not least the victims’ family, its spokesperson claimed the photographs were on public record and therefore could be distributed without limitation.

    With normal press ethics, neither of these events would have happened. Editors would have stopped them.

    When Wikileaks released diplomatic cables and military files, it exposed horrible abuse but also made public the names of local Western sympathisers, putting them at risk of vengeance from insurgents.

    Edward Snowden learned from this and wisely released his leaks through established news outlets. The recent Panama papers leak is an even better example of responsible journalism, where hundreds of journalists worked together on the material before anything was made public.

    But how can a service like Facebook use any of this?

    It’s their users who post and share the material after all, not Facebook itself. The algorithm aside, Facebook could also learn from video games.

    That’s right: many games offer discussion forums, user-generated content and in-game chat channels. Games companies try to keep a good atmosphere and to avoid hate speech and sexism, but as a game becomes popular it quickly becomes impossible for the company to monitor all the content and understand all the languages.

    The normal functions, such as reporting abuse and blocking users, are often not enough and can themselves be abused. Instead, many game companies give selected users moderator privileges, delegating editorial responsibility to trusted players. (In fact, this is the same model Google applies to its trouble-shooting forums, where users help other users.)
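
    A minimal sketch of that delegation model (the class, threshold and promotion rule are my own illustrative assumptions, not any company's actual system) shows why it scales: moderators are recruited from the community itself.

    ```python
    # A toy model of delegated moderation: players above a trust threshold are
    # promoted to moderators, so review capacity grows with the community.
    # The threshold and promotion rule are illustrative assumptions.

    TRUST_THRESHOLD = 0.9  # hypothetical reputation needed to moderate

    class Community:
        def __init__(self) -> None:
            self.reputation: dict[str, float] = {}
            self.moderators: set[str] = set()

        def join(self, player: str, reputation: float) -> None:
            self.reputation[player] = reputation
            if reputation >= TRUST_THRESHOLD:
                self.moderators.add(player)  # editorial duty delegated to a user

        def report(self, post: str) -> str:
            # Reports are routed to peer moderators, not a central staff queue.
            if self.moderators:
                return f"{post!r} sent to moderator {sorted(self.moderators)[0]}"
            return f"{post!r} queued for scarce in-house review"

    c = Community()
    c.join("veteran_player", 0.95)
    c.join("newcomer", 0.20)
    print(c.report("abusive chat message"))
    ```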

    The beauty is that it can scale infinitely, even with billions of users. Facebook probably cannot simply copy that model, but it can draw on it for its newsroom service.

    In traditional media, pluralism is perhaps the most important vaccine against media bias. With plenty of different publications available, there is always another view available. It is no coincidence the Soviet regime preferred to have only one news publication: Pravda (“The Truth” in Russian).

    With the mechanics of Metcalfe’s law, pluralism online becomes a challenge.

    As Facebook benefits particularly from that phenomenon, it has an even greater responsibility to uphold pluralism on its platform. It could start by looking at what has worked for the press and for video games.

    But its first move should be to be transparent about its news algorithm and its priorities. After all, Facebook asks for complete transparency from its users.

     

    Picture credits: forzadagro
  • Innovation

    What is going on with Google, EU and Italy?

    by Massimiliano Salini | 17/May/2016 | 3 min read

    With its Statement of Objections against Google on Android, the European Commission is rightly exercising its role as guardian of fair competition. Now it’s time for Member states to put in place a coordinated effort at EU level on the taxation of big tech companies.

     

    “The European Union has the duty to ensure freedom of competition”; only by doing this can we “ensure the innovation that is necessary to the growth of our economy”.

    These words from EU Commissioner for Competition Margrethe Vestager lay out a basic principle that the Union has a responsibility to protect.

    Fair competition and consumer protection translate into lower prices and greater choice for all EU citizens. In addition, they provide the basis for the creation of a single digital market in which European entrepreneurship can prosper.

    To give just two examples: the cost of phone calls in Europe has been reduced considerably compared to ten years ago; and families and businesses are now able to freely choose their electricity and gas supplier.

    On April 20, the European Commission published a Statement of Objections against Google, claiming that its internet search, mobile operating system (Android) and app store management practices were contrary to European competition law.

    Commissioner Vestager accused the US giant of promoting its products at the expense of its competitors, forcing smartphone producers willing to install the Android operating system to also install Google’s apps.

    This despite the US company’s claim that “Android is an open-source operating system based on open innovation”.

    In the past, the Union has been a strong guardian of fair competition, as in the two cases involving Microsoft (condemned for denying users free choice of web browser and for abuse of a dominant position) and Intel (sanctioned in 2014 for abusing its dominant position in a popular line of processors).

    Given Google’s dominant position, it will be necessary to identify structural remedies, as happened in the past with telecom companies, Microsoft, and other players in similar conditions. We enjoy the results of these remedies every day, with these markets now fully competitive.

    The EU must ensure pluralism in the market so that it can establish a fair level of competition. Only if the rules are the same for everyone will it be possible for large technology companies to emerge.

    The new technologies field is particularly complex and delicate: its huge opportunities must be accompanied by major investments in research and technology.

    Google covers approximately 90% of the smartphone operating system market thanks to Android.

    Consequently, it can also dominate the app and online search markets (both crucial for advertising sales) as well as the market for videos, thanks to YouTube.

    This massive presence means the Mountain View-based company holds the largest share of the online advertising market.

    Thinking about the incredible numbers that all this produces, we must also address the issue of the relationship between large hi-tech companies and European tax agencies.

    We are awaiting a European tax regulation; in the meantime, individual states are moving in scattered order.

    Google will pay the British treasury £130 million in back taxes, a figure many analysts consider too low bearing in mind the amounts owed since 2005. France has chosen a different path, seeking as much as €1.6 billion from Google in unpaid taxes.

    What about Italy? Amidst disputes between tax authorities and the judiciary, as well as agreements rejected by the company, the government’s position remains unclear.

     

    Picture credits: David Macchi
  • Digital Single Market

    Catherine Bearder: Why tax deals harm our digital economy (and its businesses)

    by The Digital Post | 02/May/2016 | 3 min read

    If governments resort to brokering individual tax deals, such as the UK’s recent tax deal with Google, we end up with a race to the bottom that would ultimately damage our digital economy, says Lib-Dem MEP Catherine Bearder. Brexit? Complete economic lunacy.

    What is the added value to the Digital Single Market that the UK might bring if it stays in the EU?

    CB: There are huge opportunities around the corner to be unleashed through the creation of the EU’s digital single market.

    The UK is a world leader in e-commerce, so making it easier for businesses to sell goods and services online across the single market will bring massive benefits to our economy and to British consumers. Leaving the EU now just as we are on the cusp of this digital revolution in Europe would be complete economic lunacy.

     

    What is your opinion of the UK’s recent tax deal with Google?

    The UK Chancellor could and should have got a better deal for the UK taxpayer. It is not acceptable that there is one rule for large multinational companies and another for the small businesses paying their taxes and struggling to get by.

    Companies like Google make an important contribution to jobs and the economy, but that doesn’t mean they should be able to get away with failing to pay their fair share in tax.

    Broadly speaking, what sort of measures should the EU undertake to ensure that multinationals such as Google pay a fair share of tax in each country in which they operate?

    The recent EU agreement to introduce greater transparency over tax deals is an important step forward. But what the history of tax deals in Europe shows us is that we need a more coordinated approach to ensure companies pay their fair share.

    If governments resort to brokering individual tax deals, we end up with a race to the bottom. The most important underlying principle should be that tax is paid where the actual economic activity takes place.

    This can be a real challenge in the digital sector, but it is one we must overcome if we are to create a level playing field and a thriving and fair economy.

     
    Picture credits: James Petts
  • Innovation

    Europe’s quest for a ‘smart city identity’

    by Pieter Ballon | 20/Jan/2016 | 7 min read

    In order to boost the creation of smart cities across the EU, we need a clearly defined European ‘smart city model’. The creation of such a model should be the next step in claiming our own European ‘smart city identity’.

    In the past decade, the Internet has grown exponentially. And the best is yet to come: gradually, the Internet is evolving into a true ‘Internet of Things’ in which nearly all objects that surround us (cars, household appliances, light bulbs, etc.) will be connected.

    As such, the foundation is laid for the creation of ‘smart cities’ in which tens of thousands of sensors and connected devices will optimize the way in which we live and work.

    Smart cities are emerging all over the world. In Asia and the Middle East, some are even built from scratch. Noteworthy examples are the South Korean city of Songdo (a prestigious $35 billion project whose first phase was delivered last year) and the planned Masdar City in the United Arab Emirates.

    On the other side of the world, smart cities such as San Francisco are seeing a boom in ‘bottom-up’ smart city developments, with private companies such as Uber (smart mobility), Airbnb (smart tourism) and Google’s Sidewalk Labs (city Wi-Fi hubs) pushing the uptake of smart service platforms.

    At both ends of this spectrum (from greenfield projects in Asia to commercial initiatives in the US) considerable buzz is being created, which leads many people to believe that not much is happening in Europe.

    In order to correct that perception and to give a boost to the creation of smart cities in Europe, we need a clearly defined European ‘smart city model’ as well as the proper research methodologies and financial incentives to help mitigate part of the implementation risks.

     

    Smart cities go beyond ‘top-down’ or ‘bottom-up’ platforms

    While ambitious initiatives such as the Songdo project generate lots of interest and are perfect marketing vehicles to attract investments and expertise, they tend to ignore what smart cities are really about: a smart city is not just a prefab machine crammed with the latest technologies; it is a city that lifts quality of life to a totally new dimension by responding to people’s actual needs.

    Also, initiatives such as Songdo tend to contract with one major technology consortium which is then responsible for the city’s backbone, its operations center and the definition of the major end-user services. It is clear, though, that one corporate provider can never provide the variety of services needed in a vibrant, dynamic city.

    On the other side of the world, in the US, ‘bottom-up’ developments lead to commercial smart city offerings that are well received by end users. Yet the emergence of such powerful corporate platforms, which disrupt and replace public services but are not accountable to citizens, has also raised a number of concerns.

    Already, objections against the ‘exploitation’ of public resources by these new smart city platforms have been voiced.

     

    The European smart city model: putting users’ needs and creativity at the center stage

    European cities such as Amsterdam, Barcelona, Helsinki and Vienna have clearly understood this, and are successfully reinventing themselves – in collaboration with their citizens. They have embraced a model that could be referred to as the ‘City as a Platform’ (CAAP).

    In this model, the public authority remains in the lead of smart city developments, gathering around itself a whole ecosystem of start-ups, SMEs, large firms, non-profits and citizens to jointly create smart cities.

    Yet, in spite of those efforts, a formally-defined European smart city model that could easily be picked up by other European stakeholders does not yet exist. The creation of such a model should be the next step in claiming our own European ‘smart city identity’.

    At the basis of the European smart city model should be the so-called ‘quadruple helix’ – bringing together government, citizens, academia and industry to build smart cities in a way that combines the advantages of the top-down approach (safeguarding public interests) with bottom-up steered creativity.

    For public organizations to remain a central stakeholder in this process – all while putting citizens’ needs center stage – they need to implement an actual Research, Development and Innovation (RDI) role for cities. The European model of Living Labs, where users and producers are brought together on a neutral platform to co-create and test innovations, is ideal for this.

    The European model could thus overcome some important shortcomings of the American and Asian initiatives, securing the upfront buy-in from the people that will actually have to live, work and have fun in tomorrow’s smart cities.

    At the same time, living lab research also allows us to tap directly into citizens’ own creative and valuable ideas, once again securing their buy-in and enthusiasm as they actually become smart city co-creators. Such an approach would make the European smart city model really stand out; all elements are there, we just need to formalize, operationalize and upscale them!

     

    European incentives to help mitigate smart city implementation risks

    Obviously, alongside the central role of users, financial considerations also come into play when building a smart city. Today, innovation support to mitigate risk is mainly granted to very immature technologies.

    Yet, in a smart city context, risk not only resides in technology development, but also in its implementation. While the European Commission has started to acknowledge this, national innovation agencies are still often clinging to old techno-push frameworks.

    In order to make the European CAAP model work, it is critically important that we continue to adapt our innovation programs to this new reality.

    On the one hand, we need to help smart city partners leverage Europe’s experience in living lab research methodologies (through the European Network of Living Labs, for instance) to make sure the implementation of new technologies is done right the first time; on the other hand, new measures are required to help them mitigate the financial risk of smart city deployments.

    Photo credit: Guy Mayer
  • A conversation with

    Emily O’Reilly: How digital technologies are giving new momentum to European citizenship

    by The Digital Post | 10/Nov/2015 | 7 min read

    In an exclusive interview released on the sidelines of the Web Summit in Dublin, the European Ombudsman talks with The Digital Post about the relationship between EU citizens and institutions in the time of social media, the impact of tech lobbying and much more.

     

    Do you think digital technologies are improving the accountability of the EU institutions and their democratic dimension?

     

    How does the EU Ombudsman stand up for new forms of participative democracy based on digital technologies?

     

    U.S. tech companies are the biggest spenders on corporate lobbying in Brussels. Do you see any risk?

     

    Critics argue that Europe’s approach to U.S. tech companies is driven by protectionism. What is your opinion?

     

    A few months ago the EU Ombudsman opened an inquiry into the EC’s handling of the Google antitrust case. How is the investigation progressing?

    Emily O’Reilly was elected European Ombudsman in July 2013 and took office on 1 October 2013. She was re-elected in December 2014 for a five-year mandate. An author and former journalist and broadcaster, she became Ireland’s first female Ombudsman and Information Commissioner in 2003, and in 2007 she was also appointed Commissioner for Environmental Information.
    
    
    Photo credit: Matt Foster
  • Future of the Internet

    The battle to oversee the Web

    by Julien Nocetti | 03/Jul/2015 | 9 min read

    Will large emerging countries manage to reshape internet governance around their national interests? One thing is sure: tomorrow’s internet will not resemble today’s.

    In recent years, global issues connected to the internet and its uses have vaulted into the highest realm of politics. Among these issues, internet governance is now one of the liveliest and most important topics in international relations.

    It was long ignored, confined to small silos of experts; however, Edward Snowden’s disclosures about large-scale electronic surveillance by US intelligence agencies triggered a massive backlash against the United States’ historical “stewardship” of the internet.

    Not surprisingly, the stakes are high: today 2.5 billion people are connected to the internet, and by 2030 the digital economy is likely to represent 20% of the world’s GDP. In emerging countries, the digital economy is growing by 15% to 25% every year.

    Studies project 50, even 80, billion connected “things” by 2020. Beyond mere figures, internet governance sharpens everyone’s appetite, from big corporations to governments, for the internet has taken up such a place in our lives and touches on so many issues, from freedom of expression to privacy, intellectual property rights and national security.

    It is worth underlining that the issue is particularly complex. For some, the governance of the internet should respect free-market rules (a deregulated vision carried by the Clinton-Gore administration in the 1990s) or remain self-regulated by techno-scientific communities, as conceived by the libertarian internet pioneers.

    For others, the advent of the internet in the area of law-making implies a return to the old rules and instruments, but this would mean putting aside the mutations produced by its practices, most importantly the expansion of expression and participation. For others, again, the ultimate legitimization would consist in adopting a Constitution or a Treaty of the internet which would elevate its governance to the global level.

     

    De-Westernizing the internet?

    A number of countries have criticized American “hegemony” over the internet (infrastructure, “critical resources” such as protocols, the domain names system, normative influence, etc.). To a large extent, the internet is the ambivalent product of American culture and the expression of its universalist and expansionist ideology.

    As U.S. policymakers emphasized the importance of winning the battle of ideas both during the Cold War and in the post-2001 period, the ability to transmit America’s soft power via communications networks has been perceived as vital.

    Consequently, in recent years, particularly since the Arab uprisings, governments around the world have become more alert to the disruptive potential of access to digital communications. Demographic factors are also behind calls for change: over the next decade, the internet’s centre of gravity will have moved eastwards.

    Already in 2012, 66% of the world’s internet users lived in the non-Western world. However, the reasons for questioning the U.S.’s supremacy also lie in these countries’ defiance of the current internet governance system, which is accused of favoring the sole interests of the U.S.

    While critical of the status quo, the large emerging countries do not constitute a homogeneous bloc. Back in December 2012 in Dubai, when the treaty to revise the International Telecommunication Regulations (ITRs) was being closely negotiated, some countries, such as India, the Philippines and Kenya, rallied behind the U.S.

    The Dubai negotiations nevertheless showed that these “swing states” – countries that have not decided which vision for the future of the internet they will support – are increasingly asserting their vision in order to get things moving.

    Placed under the auspices of the United Nations-led International Telecommunication Union (ITU), the Dubai meeting therefore served as a powerful tribune both to contest American preeminence and to call for multilateral internet governance.

    More fundamentally, these tensions reflect another conception of the internet, which rests on a double foundation: on the national level, the claim that states have sovereign power over the management of the internet; and on the international level, the preeminence of states over other stakeholders, and the notion of intergovernmental cooperation as the forum for debating internet governance.

    To this end, the arguments developed fit into a geostrategic context which has been reshaped by the emergence of new poles of influence. They are aimed at making the internet an instrument of both the domestic and foreign policies of one country. The preservation of state order, the fight against cybercrime, and the defense of commercial interests are several illustrations of elements that can be used to justify and advance the questioning of the current system.

    China, given its demographic, economic and technological weight, is emblematic of the current “game”. Overall, China has sought to adopt a pragmatic approach: while Beijing does not agree with the concept of the Internet Governance Forum (IGF), arguing that the so-called “multi-stakeholder” principle does not guarantee equal representation between the different stakeholders and regions of the world, it nevertheless joined ICANN’s Governmental Advisory Committee in 2009, and is now very active in promoting its own standards within the organizations where technical norms are negotiated.

    Russia, for its part, has put forward several initiatives at the U.N. over the last fifteen years – all of which have built upon a firm opposition to the U.S. and have defended a neo-Hobbesian vision in which security considerations and the legitimacy of states to ensure their digital/information sovereignty play a critical role. Moscow has thus been active within U.N. intergovernmental agencies such as ITU, and regional ones such as the Shanghai Cooperation Organization (SCO) and the BRICS forum.

     

    And then came Snowden

    The stances taken by emerging countries unsurprisingly found favorable echoes after Edward Snowden’s revelations in June 2013. While Russia opportunely stood out by granting asylum to Snowden, Brazil promptly expressed its dissatisfaction.

    President Dilma Rousseff, herself a victim of NSA wiretapping, took the lead in a virtuous crusade against the status quo: with the loss of the U.S.’s moral leadership, its stewardship over the agencies that manage the internet is less and less tolerated. At the U.N. General Assembly, Rousseff criticized Washington in somewhat aggressive terms, showing a will to rally others toward emancipation from U.S. dependency.

    Brasilia then intensified its diplomatic offensive by announcing an international summit on Internet governance – called NETmundial – to take place in April 2014 in Sao Paulo. In the meantime, Brazilian authorities promulgated the Marco Civil bill, a sort of Internet Constitution which guarantees freedom of expression, protection of privacy and net neutrality. Is the Brazilian stance in a post-Snowden context purely opportunistic?

    Interestingly, Brazil appears to be taking the middle ground between the two governance “models” that have been under discussion so far – the multi-stakeholders and the multilateral – in a context where the Europeans have stepped aside.

    Since the World Summit on the Information Society (WSIS) in 2005, Brasilia has been promoting free software and advancing a global internet governance model based on its own domestic model. Rousseff’s words fit into a long-term perspective, which sees in the opening of a new international scene – the Web – an opportunity to take the international lead, after former President Lula’s relative failures to position Brazil on international security issues.

    The world is not flat

    Will large emerging countries manage to reshape internet governance around their national interests? At the ITU’s last WCIT meeting, in Dubai in December 2012, the excessively polarized debates between self-proclaimed partisans of an “open and free” internet and the supporters of a governance resting on territorial sovereignty sparked off a strained discourse about a “digital Cold War” preceding an “internet Yalta”.

    Since Snowden’s revelations emerged, the American reaction has focused in particular on storytelling: if states around the world question U.S. oversight of the internet, it is because they want to fragment and “balkanize” the global internet – a discourse largely relayed by the U.S. internet giants.

    Yet the commercial strategies of the major internet companies themselves tend to intensify the fragmentation of online public spaces, creating distortions in internet users’ access to information and content sharing – that is to say, reducing the openness and pluralism that gave the internet its great social value.

    Here lies a powerful engine for contestation, as has recently been the case in Western Europe. Borders do reappear where they were not necessarily expected: Google, Apple and Amazon are each building their own ecosystem, from which it is becoming hard to escape.

    One thing is sure: tomorrow’s internet will not resemble today’s. Already the power of search engines diminishes the importance of the domain name system; cloud computing, the Internet of Things and the spread of mobile internet are starting to radically transform practices and to produce new complexities with regard to the internet’s outline and governance.

    It is also certain that the situation will remain at a dead end if the two broad and opposed conceptions of the internet persist: a new space of freedom or a new instrument of control.

     

    Photo credit: Paul Downey

     

  • Digital Single Market

    A glance at the DSM strategy: the good and the not-so-good

    by Alicia Richart | 29/May/2015 | 5 min read

    Much of what the Commission proposes goes in the right direction although some actions, such as plans to harmonize copyright, could stir controversy. Even US tech giants might be less worried than expected.

    On May 6th, more quickly than expected, the European Commission released its much anticipated “Digital Single Market Strategy” (DSM).

    The Juncker Commission has made the DSM the top priority of its five-year term, claiming €340 billion in potential economic gains, an exciting figure that should be supported by quantitative research analysis.

    Much of what the Commission proposes in the 20-page document seems to go in the right direction, setting out three main areas to be addressed:

    – Better access to digital goods and services. The Commission claims that delivery costs for physical goods impede e-commerce, pointing the finger at parcel delivery companies; that many sellers use unjustified geo-blocking to avoid serving customers outside their home market; that copyright needs to be modernized; and that VAT compliance for SMEs should be simplified.

    – Creating the right conditions for digital networks and services to flourish, by encouraging investment in infrastructure; replacing national-level management of spectrum with greater coordination at EU level; and looking into the behavior of online platforms, including consumer trust, the swift removal of illegal content and personal data management.

    – Maximising the growth potential of the European digital economy, by encouraging manufacturing to become smarter; fostering standards for interoperability; making the most of cloud computing and of big data, said to be “the goose that laid the golden eggs”; fostering e-services, including those in the public sector; and developing digital skills.

    It is easy to understand the appeal: the internet provides a channel for businesses to reach consumers more widely than traditional media, both in their own markets and abroad, and for consumers to enjoy wider choice and bargain-hunt more effectively.

    In a truly single digital market there are opportunities to scale up that are not present in the much smaller national markets.

    More controversial are the commission’s plans to harmonize copyright law, in particular its plan to ban “geo-blocking”, the practice of restricting access to online services based upon the user’s geographical location.

    However, the most problematic point concerns “platforms”: digital services such as Amazon, Google, Facebook, Netflix and iTunes, on which all sorts of other services can be built and which have come to dominate the internet.

    Worried that the mainly American-owned platforms could abuse their market power, the Commission will launch by the end of this year an assessment of their role.

    However, the fact that most of the 32 internet platforms identified for assessment by the Commission are American and only one (Spotify) is European hints more at how hard it is for new firms to scale up rapidly than at abuse of market power.

    What is interesting is that Mark Zuckerberg doesn’t seem to consider a Digital Single Market a disadvantage for Facebook.

    Instead, he supports the idea. Facebook has to deal with different laws in every country, and a single set of regulations for the whole European continent would actually make things easier for it.

    The digital economy also depends on the availability of reliable, high-speed and affordable fixed and mobile broadband networks throughout Europe. There are no good reasons to still have national telecom laws in this field.

    How will Europe successfully deploy 5G without enhanced coordination of spectrum assignments between Member States?

    Let us not forget that these networks do not only have an economic value; they are increasingly important for public access to information, freedom of expression, media pluralism, cultural and linguistic diversity.

    The following two pieces of legislation are related to the DSM:

    – The General Data Protection Regulation (GDPR), which replaces the 1995 Directive (and the data protection regimes of 28 Member States that it generated) with a single regime, was proposed by the Commission in 2012, has undergone amendments by both the EP and the Council of Ministers, and could be adopted in 2015 or 2016.

    – The Telecoms Regulation, reviewing the 2002 telecoms framework to cover net neutrality and roaming fees, was proposed by the Commission, has been amended by the EP and is currently with the Council, which has scaled back the EP’s amendments.

    The upcoming negotiations on the Telecoms Single Market will give a hint of the challenges to come in creating a Digital Single Market over the next years.

     

     

  • Digital Single Market

    Why the Google antitrust case could impair Europe’s digital ambitions

    by Massimiliano Trovato | 22/Apr/2015 | 4 min read

    What Europe’s tech sector needs is a real pro-growth agenda: bogus antitrust prosecutions against US-based companies will not enhance its ability to innovate and compete — they will only make it more reliant on the whims of bureaucrats.

    Last week, as Commissioner Vestager stood at the podium to announce that the EU Commission was finally issuing a Statement of Objections regarding Google’s search practices, long-time commentators of EU competition-policy matters thought back to the infamous Microsoft wars of the early 2000s—not just because it made them feel some ten years younger.

    The two cases at hand bear striking similarities: both targeted the most visible company in the computer industry at the time; both were preceded by analogous proceedings in the United States, but ended up following a different path; both focused on extremely sizable market shares, but failed to take into account changes already occurring in the marketplace; and both arguably marked exemplary instances of a politically-oriented approach to the allegedly technical field of competition law.

    Indeed, the current Google case seems to rest on shaky foundations—if anything, an investigation that lagged for five years and survived three settlement attempts should be proof of that. According to the Statement, Google favoured its own comparison shopping service at the expense of competing “vertical” services. However, it’s now hard to tell horizontal from vertical search, as they become increasingly intertwined. Long gone is the era when search engines worked as the internet’s yellow pages—and luckily so.

    Yesterday, Google’s job was to tell users where they could find answers to their questions; today, Google is in the business of providing them: it looks only natural that those should be Google’s own answers.

    But since the Commission launched its investigation, search evolved in another important respect: mobile ate up desktop computer time, apps displaced the browser, the social web did the rest. To be clear, traditional search engines are far from marginal, but they no longer are the primary tools for gathering information, particularly in specialized niches. This applies to Google, as well. For years it’s been said that competition was just a click away: users are now beginning to click away, only in a different direction than expected.

    Irrespective of its merits, the case gained momentum over the last few months. In November, an EU Parliament resolution called for the unbundling of search engines. Just days before last week’s Statement of Objections, digital commissioner Oettinger said Europe’s online businesses were “dependent on a few non-EU players world-wide” and urged Europe to “replace today’s Web search engines, operating systems and social networks”.

    Europe’s digital shortcomings are a reality, but going after US-based tech companies won’t help overcome them.

    In fact, the only industry that stands to benefit from a new season of muscular antitrust enforcement is the lobbying industry. What Europe’s tech sector needs is a real pro-growth agenda: bogus antitrust prosecutions will not enhance its ability to innovate and compete—they will only make it more reliant on the whims of bureaucrats.

     

    photo credits: vango
  • Innovation

    Four recommendations for an open and fair smart city

    by Marleen Stikker | 23/Mar/2015 | 15 min read

    The big question is this: do administrators and politicians understand the consequences of the “smartness” they are injecting into public infrastructures?

    Recommendation 1: Focus on peer-to-peer technology.

    In its infancy, the Internet’s designers opted for an architecture of distributed communication. This means that, within the network, each node is equal to any other node without the intervention of a central source. Such networks are often called “peer-to-peer” (P2P)—equal to equal. Within these networks, everyone has access to the same tools without having to ask for permission.
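
    To make that architecture concrete, here is a minimal sketch in Python; the names and the gossip logic are illustrative only, not any particular protocol, but they show how a message spreads through a network of equal peers without any central source:

    ```python
    # Minimal peer-to-peer sketch: every node is equal, connections are
    # symmetric, and a message spreads with no central source involved.

    class Node:
        def __init__(self, name):
            self.name = name
            self.peers = []    # direct links to equal nodes
            self.seen = set()  # messages already handled

        def connect(self, other):
            # Links are symmetric: there is no client/server hierarchy.
            self.peers.append(other)
            other.peers.append(self)

        def receive(self, message):
            if message in self.seen:
                return
            self.seen.add(message)
            print(f"{self.name} received: {message}")
            for peer in self.peers:  # gossip the message onward
                peer.receive(message)

    # Build a small ring of peers and inject a message at an arbitrary node.
    a, b, c, d = Node("a"), Node("b"), Node("c"), Node("d")
    a.connect(b); b.connect(c); c.connect(d); d.connect(a)
    a.receive("hello, peers")  # reaches every node in the network
    ```

    Any node could have injected the message, and none of them needed permission from a central authority.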

    This distributed, horizontal architecture of the Internet has been the determining factor for its disruptive nature. It undermines traditional hierarchies, and provides opportunities for newcomers to upset antiquated business models in no time.

    This leads to an on-going struggle between old and new powers. To keep the Internet open to new entrants and provide everyone the same opportunities, net neutrality remains crucial. Unfortunately, net neutrality often comes under pressure politically, and must be defended from those with ulterior motives.

    The “open” Internet has also been the basis for the explosion of digital social innovation in our modern society. Its structure offers people opportunities to create things of real value through self-organization, sharing, and the production of knowledge and goods.

    Not as isolated individuals, but as networked innovators in contact with peers around the world. Through international cooperation, digital tools have become sophisticated enough to create sustainable and scalable economic models. P2P Foundation keeps track of all these developments on their blog, which I highly recommend.

    The term “sharing economy” often crops up when discussing these new models. But, beware, it is a treacherous term. “Sharing economy” is often used to describe the business model of companies like Facebook, Google, Airbnb, and Uber.

    These companies subscribe to values that are fundamentally different from those of a “sharing economy”—values that make the term “platform capitalism” seem more appropriate. In the hands of companies like these, Internet users’ data is centrally stored and exploited.

    To assess whether a new service is truly social and reciprocal in nature, you first must analyze its business model. Who is the owner? Is the technology open or closed? What is their policy on data? Is the production process a fair one?

    For sustainable economic transformation, it is better to avoid companies driven by shareholder value and to back organizations driven by social values. That means procurement processes should favor open and fair technologies; in this way you maximize the power of social entrepreneurs, citizens, and P2P initiatives.

    Care should always be taken, when making policy and developing innovative tools, that not only large companies and research institutions are allowed to sit at the table. Small and medium-sized companies should be actively involved in these processes.

    In addition to economic development, digital social innovation has the potential to enhance technological literacy. A good example of this potential is The Smart Citizen Kit project.

    The project allows people to measure environmental variables (e.g. air quality, noise levels, etc.) themselves, and to share the information they’ve gathered with others. This produces data (and visualizations of that data) that policymakers and scientists might find interesting.
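
    The measure-and-share loop itself is simple. The sketch below is only a hypothetical rendering of it; the endpoint URL and field names are placeholders, not the Smart Citizen Kit’s actual API:

    ```python
    # Hypothetical measure-and-share loop in the spirit of the Smart
    # Citizen Kit: read local sensors, then publish the readings to an
    # open endpoint where anyone can inspect them. All names invented.
    import json
    import time
    import urllib.request

    def read_sensors():
        # On real hardware these values would come from a microphone and
        # a particulate-matter sensor; here they are placeholders.
        return {"noise_db": 48.2, "pm25_ugm3": 11.7, "timestamp": time.time()}

    def publish(reading, url="https://example.org/open-data/readings"):
        # POST the reading as JSON to a (fictitious) open-data platform.
        body = json.dumps(reading).encode("utf-8")
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    print(read_sensors())  # the same values the citizen shares with everyone
    ```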

    More than that, participation in this project helps to increase awareness and understanding of measurement. By being involved in the generation of data, and by using open source hardware and software, people begin to understand that measurement is not an objective process. Such an insight is of great importance in an era where salvation is expected to appear in the form of big data.

    To give meaning to data, we need algorithms to analyze it. And, the results of these analyses often provide arguments for policy.

    But, what are algorithms? Who designs these models? And what is their worldview? Instead of simply focusing on opening up data, we must also focus on opening the computer models and algorithms used to analyze the data that informs the policies upholding our democracy.
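
    A toy example makes the point. The same open measurements, run through two equally defensible algorithms, can support opposite policy conclusions; the readings and the limit below are invented:

    ```python
    # Same open data, two defensible models, two opposite conclusions.
    # Hourly PM2.5 readings and the regulatory limit are invented.

    hourly_pm25 = [8, 9, 35, 80, 12, 10, 9, 11]
    LIMIT = 25

    # Model A judges the day by its average concentration.
    average = sum(hourly_pm25) / len(hourly_pm25)
    verdict_a = "compliant" if average <= LIMIT else "in breach"

    # Model B judges the day by its worst hour (peak exposure).
    peak = max(hourly_pm25)
    verdict_b = "compliant" if peak <= LIMIT else "in breach"

    print(f"average model: {average:.2f} -> {verdict_a}")  # 21.75 -> compliant
    print(f"peak model:    {peak} -> {verdict_b}")         # 80    -> in breach
    ```

    Open data alone would not tell you which of the two verdicts informed a policy; only an open model does.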

     

    Recommendation 2: Be open. Be fair.

    The credo of the “Maker Movement” is: “If you can’t open it, you don’t own it” (Maker’s Bill of Rights). Yet, while products are becoming smarter, we seem to be getting dumber.

    We can barely open up our smart devices without the risk of destroying them. The concept of “self repair” no longer exists. Take the car, for example. Until recently, a car was something you could repair yourself. All right, maybe not yourself, but surely the neighbor or the garage around the corner could take care of it for you.

    These days, cars house mobile computers that rarely disclose their secrets.

    Starting in 2018, all new cars in Europe will be required to carry a black box, called “eCall,” which can independently contact emergency services. Europe has decided that the car is no longer private property by including a component that you can neither open nor remove, and that constantly keeps watch over you.

    Unfortunately, this applies to many smart city solutions: you cannot open them. Before governments enshrine such technologies in law, they must be the subject of public debate.

    New technologies should abide by the values you uphold as a society. Providers should be assessed on questions like: Is the technology based on open hardware, open source, and open data? Is the idea of “Privacy by Design” taken into account? Do they make use of a distributed peer-to-peer model? And, last but not least: Is the production process fair and sustainable?

    It should be a fundamental principle that a government only invests in open technology. Currently, when municipalities have to choose between two administrative systems, there is no semblance of an open market.

    There are usually only two players in the game. You either choose this one, or the other; and—whatever you choose—you’re stuck with it for decades to come. We’d be much better off using systems based on open technology.

    Additionally, we must ensure that publicly purchased technologies are fair technologies. We must realize the suffering that often hides behind many gadgets and technologies.

    Think about the exploitation of children in the mines of the Congo, or the miserable working conditions in China. Not to mention the toll the manufacturing process takes on the environment, and the gigantic mountain of e-waste it generates. It is our task to strive for fair technology and to build an economy that puts human rights at the center.

     

    Recommendation 3: Work within the Quadruple Helix model with the citizen as a full partner.

    In one of its reports, the OECD called for better cooperation between government, industry, and academia by bringing all three together in a so-called “Triple Helix.” Since then, all economic advisory bodies are based on the interactions of these three entities.

    The main problem with this model is that society gets completely pushed aside. Here and there, one hears murmurs of the “Quadruple Helix”: the idea that citizens should be central to these decisions.

    Yet, idle thoughts and whispers rarely result in substantial change. Social actors belong at the table, and should be involved in policy and decision making processes.

    Another problem with this model is that innovation does not necessarily originate in large companies and universities. Digital social innovation also comes from the broad, inventive ecosystem of creators, hackers, and social entrepreneurs. In the search for disruptive solutions, we need innovative strategies based on those outside the “Triple Helix”.

     

    Recommendation 4: I’m smart, too.

    Not a day goes by without some sort of Smart City initiative cropping up. The Smart City movement is convinced that technology is the answer to big city problems. But technology is not neutral, and must always be questioned.

    Without technological literacy, we can only consume, and never produce. Only read, never write. If systems are smart, but we remain “stupid,” can we really say that we’ve progressed?

    The major goal of our time is to become smarter and more tech-savvy. This is true not only for the youth, but also for those who currently hold the controls: the people responsible for making policies.

    Smart City technologies introduce a huge dependence on suppliers, and IT departments within the public sector often struggle with the vendor lock-in that can accompany administrative systems. Only the suppliers can read and update their proprietary software.

    So, who will hold the key to the smart city? Administrators, politicians, IT departments? Or the shareholders of companies? The companies that would just as soon sell their SmartCity software to North Korea as they would sell it to the Netherlands?

    The big question is this: do administrators and politicians understand the consequences of the “smartness” they are injecting into public infrastructures? Take the great promise of “smart lighting,” a showpiece of the energy-saving, sustainability agenda. With smart lighting, the light only turns on when someone walks past a sensor. For some people, this provides a sense of security.

    Others find it a sinister thought that someone with bad intentions could be waiting for them in the darkness. Depending on the context, light can mean the difference between life and death.

    At the border between Mexico and the United States, for instance, simply walking with a flashlight can mean being shot on sight. Smart lighting might save energy, but it introduces a social dilemma. Will we sacrifice safety for the sake of efficiency?

    Let us ask ourselves these questions before we inject technology into the bloodstream of the city, and consider carefully the models and algorithms that will affect our reality.

    Let’s make sure that those who are making decisions about the future of our cities have a real understanding of what technology means. Learn what code is, and the standards and values inherent to it. Only then can you make the right choices.

     

  • Data Economy

    ‘Green’ data centers: easier & cheaper than you think

    by René Post | 17/Dec/2014 | 5 min read

    Data centers are formidable energy suckers, accounting for a large share of Internet energy consumption. Worse, they rely heavily on fossil power. Yet slashing their carbon footprint is far from inconceivable. Cloud computing technologies may help.

    In this age of ultra-fast fiber, we still need to generate the electrons that keep the photons flowing at the speed of light across the globe. Estimates suggest that around 10% of all electricity generated worldwide is now consumed within the digital domain, with a new iPhone, once the network infrastructure behind it is counted, consuming more energy than the average refrigerator.

    Data centers account for around 3%, so the location of data centers is closely tied to the availability of cheap electrical power – whether naturally cheap or subsidized.

    The energy-generation landscape in Europe, however, is changing quickly. Fossil-fuel-powered electricity plants in countries like Denmark and Germany are being phased out, and the effects are not limited to countries that embrace the ‘Energiewende’: they ripple across borders, disrupting existing energy markets in ‘slower’ countries like the Netherlands and Belgium.

    But although a new era is clearly looming, inertia still applies. A coal power plant in the Netherlands that gets pushed out of business by German wind power does not simply close down; it draws up new plans to attract the aluminium smelters of this age: data centers.

    And why not – isn’t that how we modernize the economy? On paper, the marriage between old-style power plants and data centers looks ideal: coal and nuclear power plants need baseload clients, and here they are.

    Compare that with the perceived unpredictability of solar and wind power, and it seems inevitable that (non-hydro-producing) countries that want to play a role in this digital age are stuck with fossil power generation for a long time to come…

    But is that really so? The underlying assumption I want to challenge here is that data centers, like old-style factories, need a stable supply of energy. From a technological point of view, that is not a necessity.

    Let us turn briefly back to the electrons and photons: the economics of transporting electricity over long distances might be terrible, but the economics of transporting data look much better.

    Facebook operates a 27,000 m² data center near Luleå in northern Sweden – where there are more moose than people, and which is several thousand kilometers away from the larger European cities where the users are. But that kind of distance hardly matters on the Internet, since Berlin, Paris and London are just a few milliseconds away.

    By the same reasoning, large solar-powered plants in North Africa could just as easily provide the European (and African, and Middle Eastern) Internet with all the photons it needs.

    Or what about the North Sea on a windy winter night – clusters of wind turbines with data centers built directly into them? No power lines to shore, only a few fibers?

    This would very quickly lead to an Internet that would be powered mainly by renewables.

    The only requisite would be that these data centers share their data, so that in principle each of them can serve the same content; whichever location has the most abundant electricity at a given moment takes over the main load.
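
    As a rough illustration of that load-following idea, consider the sketch below; the site names and power figures are invented, and a real scheduler would also have to weigh latency, capacity and demand:

    ```python
    # A toy "follow the renewables" scheduler: every site serves the same
    # data, so requests can be routed to wherever power is most abundant.
    # Site names and availability figures are invented for illustration.

    available_power_mw = {
        "lulea_wind": 42.0,
        "north_sea_wind": 67.5,
        "sahara_solar": 0.0,  # night-time in North Africa
    }

    def pick_site(power_by_site):
        # Because all sites hold the same data, the choice can be made
        # purely on where the electricity is most abundant right now.
        return max(power_by_site, key=power_by_site.get)

    print("routing main load to:", pick_site(available_power_mw))
    # -> routing main load to: north_sea_wind
    ```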

    Oh, and that principle of sharing data across multiple locations? It might sound high-tech, but it was actually invented a while ago. It is called ‘The Cloud’.
