Attempts to censor alleged “fake news” on the Internet will backfire massively. The mainstream media and mainstream politicians should instead make a better effort to convince people. The internet is open to them too.
One of the promises of the internet has been that it would bring about better democracy (here and here, for example). Even before the web was invented, Vannevar Bush, whose Memex machine anticipated the hypertext concept, expected that science and information would lead to a better society (source).
Since the 1990s, when those ideas started to materialize, everybody could see that the internet was vastly increasing access to information and the ease of connecting people.
The conventional wisdom has been that better-informed citizens would make better political decisions and that more connected people would forge a more tightly connected society. Both would lead to e- (for electronic) or i- (for internet) democracy.
The peak of eDemocracy
In retrospect, it would appear that the peak eDemocracy optimism was reached in 2008 with the election of Barack Obama as the president of the United States.
His was one of the first campaigns in which the internet played a major – some would say decisive – role. Facebook’s revolutions – the Ukrainian and Arab Springs – reinforced the hope in the positive change that information technology can bring to the world.
Social media like blogs, Facebook and Twitter were the heroes of the day. Revolutions were won on Twitter and dictators toppled on Facebook.
And then Brexit and Trump won. No longer is social media the hero of the day. On the contrary. The internet is now blamed for results that were not what the mainstream media and the intelligentsia recommended.
There is an old saying that goes, “On the internet, no one knows you are a dog”. On Facebook, no one knows your news company has a skyscraper in Manhattan or offices on Fleet Street.
You could be a teenager in Macedonia, an independent writing for Breitbart News, or an anonymous blogger. The internet would carry your messages in exactly the same way as if you were a “proper” media outlet.
Social networks would disseminate news based on the enthusiasm of readers’ recommendations, not on pedigree.
Brexit and Trump
For the first time, people’s opinions were largely shaped by their peers, not by professional opinion makers and thought leaders. We, the people, were the gatekeepers, not the mainstream media.
Greener’s Law – never argue with a man who buys ink by the barrel – was proven wrong. It is a version of the saying “don’t argue with children or journalists”: the child will in the end throw a stone through your window, and the journalist will always have the last word.
Trump was able to wage a frontal war against the mainstream media and to win it. On the internet, social media has the last word.
Ending up on the losing side, the mainstream media invented excuses and concepts such as “fake news” and “post-truth”. It had the opposite effect.
People were reminded, on the internet, that it was the old media that had been biased and had openly colluded with one of the sides in the UK referendum and the US elections. News from the mainstream media was labelled “fake news” too, just like news from the new media.
Internet as a threat
For the mainstream media and mainstream politics, the internet has suddenly fallen from grace – it is no longer a tool of human rights and democracy. A free and open internet is not seen as an asset of our democracy but as a threat.
Politicians, particularly in Europe, are speaking openly about the threat that Facebook and other social media pose to democracy. They are calling for the regulation of social networks (Germany, France, EU).
They would like to ban fake news and make sure that only properly verified content can be spread by users. It is tragic to see how happy the internet companies are to comply (Facebook), instead of standing firm and not letting any form of censorship interfere with the free exchange of ideas on their networks.
The established politics and media cannot afford to let democratic procedures – with the help of social networks – bring about the wrong result again. In 2017 there will be very important elections in France and Germany, and the anxiety is understandable.
But calling the results of a democratic election or a referendum wrong is the essence of a failed understanding of democracy and of the internet’s impact on it – the notion that it causes wrong results, that democracy reaches wrong decisions.
What happened to the maxim that “in a democracy the people are always right”?
Friction free democracy
Bill Gates famously said that the essential contribution of the internet is that it reduces friction in the economy: it brings buyers and sellers closer together and gives each more information about the other.
What was said about the economic market can be said about the political market. There is less friction between the will of the people and politics. There is more information about the people and about the politicians.
It would be wrong to re-introduce friction – with measures that are essentially censorship by some kind of Orwellian ministry of truth. In Germany, an organization called Correctiv will be telling us what is the truth and what is not. In France, a panel of old-media representatives will be doing the same.
I have no doubt about the good intentions behind all that, just as I have no doubt that the social media companies are playing along not out of good intentions but out of business interests.
I am just afraid that it will backfire. Backfire massively. And the stakes are simply too high. The very existence of the European Union is hanging by the thread of the French elections – and with the existence of the European Union, the existence of European civilization, which cannot be protected by former superpowers individually.
Use the level playing field
Instead of shaping the internet according to their wishes, the mainstream media and mainstream politicians should make a better effort to convince people. The internet is open to them too.
They will need to do better than calling someone a fascist or a populist. The net should be used to debate issues, not to exchange labels and hashtags. It should be used to argue, to speak to people’s fears and dreams. This is not populism; this is democracy.
Will we get a wrong result? When asked whether the French Revolution was a positive or a negative event in history, the Chinese premier Zhou Enlai reportedly answered that it may be too early to tell.
This may be a post-truth story, but it helps introduce my point: it may be too early to tell whether Brexit was wrong. I think it was a mistake. But I also think blaming the internet for it is a mistake as well. And drawing policy decisions from this wrong diagnosis would lead to even graver mistakes.
The internet is making democracy more challenging and more open. Having friends and support in the mainstream media is not enough anymore.
People, not just journalists, are gatekeepers, and they need to be convinced. So let’s stop bashing Facebook, let’s stop blaming Russian hackers, let’s scrap the ideas for censorship of social networks. Let’s stand for freedom of speech – which includes the freedom to publish fake news!
The so-called populists thrive on an “us vs. them” narrative. People have sympathy for underdogs. They elected Trump and chose Brexit against the better advice that dominated the mainstream media.
If that dominance spreads to social media as well, the job of the “populists” will only become easier. The whole internet cannot be controlled; somewhere, people will read how unfair the battle of their David against the enemies’ Goliath is.
Fake news neutrality
Our societies need more trust. And that means trusting people to distinguish between the true and the fake themselves, and trusting that the true can win over the fake without tilting the playing field against the fake.
Let’s trust in the power of truth and the weakness of fakery enough to keep the internet and the social networks “fake news”-neutral and open to all.
Facebook newsroom should take a page from the accumulated experience in hundreds of years of press ethics, and a couple of decades of video games. Its first move should be to be transparent about its news algorithm and its priorities.
The tech community loves to make up laws describing certain phenomena, such as Moore’s law, which predicts the growth in computing power, and the perhaps more humorous Godwin’s law, which says that any online discussion that goes on long enough will end with one participant comparing another to Hitler.
But in order to understand the digital world, probably the most important of these laws would be Metcalfe’s law.
It says that the value of a network increases with the square of the number of members (or nodes), which by extension means that the price of staying outside the network increases with every new member.
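The quadratic growth is easy to see in a toy calculation. A minimal sketch (function names are mine, and the pairwise-connection count is just one common way to operationalize Metcalfe's law):

```python
# Toy illustration of Metcalfe's law: a network of n members has roughly
# n^2 potential connections, so the marginal value of joining (and the
# price of staying outside) grows with every new member.

def network_value(n: int) -> int:
    """Number of possible pairwise connections among n members."""
    return n * (n - 1) // 2  # grows quadratically in n

def cost_of_staying_out(n: int) -> int:
    """Connections you forgo by not becoming member n+1 of an n-member network."""
    return network_value(n + 1) - network_value(n)  # equals n

for n in (10, 100, 1000):
    print(n, network_value(n), cost_of_staying_out(n))
```

Note that the cost of opting out is exactly the current network size, which is why opting out of a network with over a billion members is no longer realistic.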
This can be good news: for auctions or advert listings, it is convenient to have everything in one place. The downside, of course, is that it spawns very powerful dominant niche players (cue Vestager vs Google).
No business knows better how to game Metcalfe’s law than Facebook. With some 1.6 billion users, the point where anyone could realistically opt out was passed long ago.
Far from the naïve place of birthday greetings and flirty pokes it may once have been, Facebook today is more like an operating system for interaction far beyond social life: marketers use it to build hype and businesses to interact with customers, but dictators also use it to spread propaganda and terrorist organisations to distribute beheading videos.
It cannot be easy to be Mark Zuckerberg: one day your service is believed to bring democracy to the Middle East through its sheer existence, the next you have to travel to Germany to make apologies for Nazi hate speech.
If you’re a global service, you face the problem of different rules in different jurisdictions. So far, Silicon Valley has successfully played the “safe harbour” card, saying they can’t control what users post. (If all else fails, play the algorithm card – as in “we don’t really know what it does”!).
This is not really saying “we take no responsibility” but rather a way of making their own rules. Convenient for the businesses; the problem is that other people may disagree. And the deeper you get involved in a culture, the more difficult it gets to surf above the clouds.
These trends come together as Facebook’s power over the news becomes more evident.
Depending on what Facebook decides to show in its users’ feeds, it wields a lot of influence over the digital public sphere. The current debate about Facebook’s alleged anti-conservative bias hints at a much bigger issue.
When we ask “how can we know if the gravitation toward anti-Trump stories is a result of user preference or algorithm settings?”, we’re really asking questions such as: What rules and principles apply to Facebook’s news feed algorithm? Who is the editor in charge? Does Facebook subscribe to normal press ethics such as verifying stories with more than one source and hearing both sides of an issue?
These are basic things, taught at every journalism school and developed over decades – centuries, even – of free press. Systems of self-regulatory ethics bodies continuously evaluate and evolve these learnings, tweaking which publishing decisions are criticised and which are not.
The details of the formal systems may vary from country to country, but the principles are the same and around them is a living conversation in the professional journalist community about when to publish a story and when not to, balancing the interests of privacy (for example of crime victims) and the public’s right to information.
It is tempting to conclude that internet users should simply be better advised – not to share hoax stories and to be sceptical of sources – but that is the easy way out.
If journalists with years of education and the ethics of the professional community to draw from find these decisions difficult enough to deserve seminars, ethics committees, even specialist magazines and radio shows, how could we ever expect the average social media user to take such a responsibility?
The answer will always be that the organisation that delivers the news is responsible for the content. Mass distribution with no editorial responsibility is a recipe for disaster.
In 2012 in Gothenburg, Sweden, teenagers’ use of social media for sexual bullying and hate speech spiralled out of control and led to beatings and even street fights in what became known as the “Instagram riots”.
When The Pirate Bay posted autopsy photographs from a court case involving two children who had been murdered with a hammer, much to the horror of the Swedish public and not least the victims’ family, its spokesperson claimed the photographs were on public record and therefore could be distributed without limitation.
With normal press ethics, neither of these events would have happened. Editors would have stopped them.
When Wikileaks released diplomatic cables and military files, it exposed horrible abuse but also made public the names of local Western sympathisers, putting them at risk of vengeance from insurgents.
Edward Snowden learned from this and wisely released his leaks through established news outlets. The recent Panama papers leak is an even better example of responsible journalism, where hundreds of journalists worked together on the material before anything was made public.
But how can a service like Facebook use any of this?
It’s their users who post and share the material after all, not Facebook itself. The algorithm aside, Facebook could also learn from video games.
That’s right: many games offer discussion forums, user-generated content and in-game chat channels. Game companies try to keep a good atmosphere and to avoid hate speech and sexism, but as a game becomes popular it quickly becomes impossible for the company to monitor all the content and understand all the languages.
The normal functions, such as reporting abuse and blocking users, are often not enough and can themselves be abused. Instead, many game companies give selected users moderator privileges, delegating editorial responsibility to trusted players. (In fact, this is the same model Google applies to its trouble-shooting forums, where users help other users.)
The beauty is that it can scale almost infinitely, even with billions of users. Facebook probably cannot simply copy that model, but it can use it for its newsroom service.
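The delegated-moderation pattern described above can be sketched in a few lines. All class and method names here are hypothetical; this is an illustration of the principle, not any real platform's API:

```python
# Sketch of delegated moderation: ordinary users can report posts,
# but only trusted users holding moderator privileges can hide them.

class Post:
    def __init__(self, author: str, text: str):
        self.author = author
        self.text = text
        self.reports = 0
        self.hidden = False

class Community:
    def __init__(self):
        self.moderators = set()  # users the platform has chosen to trust
        self.posts = []

    def grant_moderator(self, user: str) -> None:
        self.moderators.add(user)

    def report(self, post: Post) -> None:
        post.reports += 1        # reporting alone never hides content

    def hide(self, user: str, post: Post) -> bool:
        # editorial responsibility is delegated to trusted players only
        if user in self.moderators:
            post.hidden = True
            return True
        return False

community = Community()
community.grant_moderator("alice")
post = Post("bob", "abusive content")
community.report(post)
assert not post.hidden                  # a report does not hide the post
community.hide("alice", post)
assert post.hidden                      # a moderator can hide it
```

The design point is that the expensive judgment calls scale with the number of trusted members, not with the total volume of content.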
In traditional media, pluralism is perhaps the most important vaccine against media bias. With plenty of different publications, there is always another view available. It is no coincidence the Soviet regime preferred to have only one news publication: Pravda (“The Truth” in Russian).
With the mechanics of Metcalfe’s law, pluralism online becomes a challenge.
As Facebook benefits particularly from that phenomenon, it has an even greater responsibility to uphold pluralism on its platform. It could start by looking at what has worked for the press and for video games.
But its first move should be to be transparent about its news algorithm and its priorities. After all, Facebook asks for complete transparency from its users.
Much of what the Commission proposes goes in the right direction, although some actions, such as the plans to harmonize copyright, could stir controversy. Even the US tech giants might be less worried than expected.
On May 6th, more quickly than expected, the European Commission released its much anticipated “Digital Single Market Strategy” (DSM).
The Juncker Commission has made the DSM the top priority of its five-year term, claiming €340 billion in potential economic gains – an exciting figure that should be supported by quantitative research.
Much of what the Commission proposes in the 20-page document seems to go in the right direction, setting out three main areas to be addressed:
– Better access to digital goods and services. The Commission claims that delivery costs for physical goods impede e-commerce, pointing the finger at parcel delivery companies; that many sellers use unjustified geo-blocking to avoid serving customers outside their home market; that copyright needs to be modernized; and that VAT compliance for SMEs should be simplified.
– Creating the right conditions for digital networks and services to flourish, by encouraging investment in infrastructure; replacing national-level management of spectrum with greater coordination at EU level; and looking into the behavior of online platforms, including consumer trust, the swift removal of illegal content, and personal data management.
– Maximising the growth potential of our European digital economy, by encouraging manufacturing to become smarter; fostering standards for interoperability; making the most of cloud computing and of big data, said to be “the goose that laid the golden eggs”; fostering e-services, including those in the public sector; and developing digital skills.
It is easy to see that the internet provides a channel for businesses to reach consumers more widely than traditional media, both in their own markets and abroad, and for consumers to enjoy a wider choice and bargain-hunt more effectively.
In a truly single digital market there are opportunities to scale up that are not present in the much smaller national markets.
More controversial are the Commission’s plans to harmonize copyright law, in particular its plan to ban “geo-blocking”, the practice of restricting access to online services based on the user’s geographical location.
However, the most problematic point concerns “platforms”: digital services such as Amazon, Google, Facebook, Netflix and iTunes, on which all sorts of other services can be built and which have come to dominate the internet.
Worried that the mainly American-owned platforms could abuse their market power, the Commission will launch by the end of this year an assessment of their role.
However, the fact that most of the 32 internet platforms identified for assessment by the Commission are American, and only one (Spotify) is European, hints more at how hard it is for new European firms to scale up rapidly than at abuse of market power.
What is interesting is that Mark Zuckerberg does not seem to consider a Digital Single Market a disadvantage for Facebook.
Instead, he supports the idea. Facebook has to deal with different laws in every country, and a single set of regulations for the whole European continent would actually make things easier for it.
The digital economy also depends on the availability of reliable, high-speed and affordable fixed and mobile broadband networks throughout Europe. There are no good reasons to still have national telecom laws in this field.
How will Europe successfully deploy 5G without enhanced coordination of spectrum assignments between Member States?
Let us not forget that these networks do not only have an economic value; they are increasingly important for public access to information, freedom of expression, media pluralism, cultural and linguistic diversity.
The following two pieces of legislation are related to the DSM:
– The General Data Protection Regulation (GDPR), replacing the 1995 Directive that generated the data protection regimes of 28 Member States with a single one, was proposed by the Commission in 2012, has undergone amendments by both the EP and the Council of Ministers, and could be adopted in 2015 or 2016.
– The Telecoms Regulation, reviewing the 2002 telecoms framework to cover net neutrality and roaming fees, was proposed by the Commission in 2013, was amended by the EP, and is currently with the Council, which has scaled back the EP’s amendments.
The upcoming negotiations on the Telecoms Single Market will give a hint of the challenges to come in creating a Digital Single Market over the next years.
The big question is this: do administrators and politicians understand the consequences of the “smartness” they are injecting into public infrastructures?
Recommendation 1: Focus on peer-to-peer technology.
In its infancy, the Internet’s designers opted for an architecture of distributed communication. This means that, within the network, each node is equal to any other node without the intervention of a central source. Such networks are often called “peer-to-peer” (P2P)—equal to equal. Within these networks, everyone has access to the same tools without having to ask for permission.
This distributed, horizontal architecture of the Internet has been the determining factor for its disruptive nature. It undermines traditional hierarchies, and provides opportunities for newcomers to upset antiquated business models in no time.
This leads to an on-going struggle between old and new powers. To keep the Internet open to new entrants and provide everyone the same opportunities, net neutrality remains crucial. Unfortunately, net neutrality often comes under pressure politically, and must be defended from those with ulterior motives.
The “open” Internet has also been the basis for the explosion of digital social innovation in our modern society. Its structure offers people opportunities to create things of real value through self-organization, sharing, and the production of knowledge and goods.
Not as isolated individuals, but as networked innovators in contact with peers around the world. Through international cooperation, digital tools have become sophisticated enough to create sustainable and scalable economic models. P2P Foundation keeps track of all these developments on their blog, which I highly recommend.
The term “sharing economy” often crops up when discussing these new models. But, beware, it is a treacherous term. “Sharing economy” is often used to describe the business model of companies like Facebook, Google, Airbnb, and Uber.
These companies subscribe to values that are fundamentally different from those of a “sharing economy” – values that make the term “platform capitalism” seem more appropriate. In the hands of companies like these, internet users’ data is centrally stored and exploited.
To assess whether a new service is truly social and reciprocal in nature, you first have to analyze its business model. Who is the owner? Is the technology open or closed? What is their policy on data? Is the production process a fair one?
For sustainable economic transformation, it is better to avoid companies driven by shareholder value and to back organizations driven by social values. Procurement processes should favor open and fair technologies; in this way you maximize the power of social entrepreneurs, citizens, and P2P initiatives.
Care should always be taken, in policymaking and in the development of innovative tools, that not only large companies and research institutions are allowed to sit at the table. Small and medium-sized companies should be actively involved in these processes.
In addition to economic development, digital social innovation has the potential to enhance technological literacy. A good example of this potential is The Smart Citizen Kit project.
The project allows people to measure environmental variables (e.g. air quality, noise levels, etc.) themselves, and to share the information they’ve gathered with others. This produces data (and visualizations of that data) that policymakers and scientists might find interesting.
More than that, participation in this project helps to increase awareness and understanding of measurement. By being involved in the generation of data, and by using open source hardware and software, people begin to understand that measurement is not an objective process. Such an insight is of great importance in an era where salvation is expected to appear in the form of big data.
To give meaning to data, we need algorithms to analyze it. And, the results of these analyses often provide arguments for policy.
But, what are algorithms? Who designs these models? And what is their worldview? Instead of simply focusing on opening up data, we must also focus on opening the computer models and algorithms used to analyze the data that informs the policies upholding our democracy.
Recommendation 2: Be open. Be fair.
The credo of the “Maker Movement” is: “If you can’t open it, you don’t own it” (the Maker’s Bill of Rights). Yet, while products are becoming smarter, we seem to be getting dumber.
We can barely open up our smart devices without the risk of destroying them. The concept of “self repair” no longer exists. Take the car, for example. Until recently, a car was something you could repair yourself. All right, maybe not yourself, but surely the neighbor or the garage around the corner could take care of it for you.
These days, cars house mobile computers that rarely disclose their secrets.
Starting in 2016, all new cars in Europe will be equipped with a black box, called “eCall”, which can independently contact the emergency services. By mandating a component that you can neither open nor remove, and that constantly keeps watch over you, Europe has decided that the car is no longer fully private property.
Unfortunately, the same applies to many smart-city solutions: you cannot open them. Before governments write such technologies into the fabric of our society, they must be the subject of civic debate.
New technologies should abide by the values you uphold as a society. Providers should be assessed on questions like: Is the technology based on open hardware, open source, and open data? Is the idea of “Privacy by Design” taken into account? Do they make use of a distributed peer-to-peer model? And, last but not least: Is the production process fair and sustainable?
It should be a fundamental principle that a government only invests in open technology. Currently, when municipalities have to choose between two administrative systems, there is no semblance of an open market.
There are usually only two players in the game: you choose either one or the other, and – whichever you choose – you are stuck with it for decades to come. We would be much better off using systems based on open technology.
Additionally, we must ensure that publicly purchased technologies are fair technologies. We must recognize the suffering that often hides behind many gadgets and technologies.
Think about the exploitation of children in the mines of the Congo, or the miserable working conditions in China – not to mention the toll the manufacturing process takes on the environment and the gigantic mountain of e-waste it generates. It is our task to strive for fair technology and to build an economy that puts human rights at the center.
Recommendation 3: Work within the “Quadruple Helix” model with the citizen as a full partner.
In one of its reports, the OECD called for better cooperation between government, industry, and academia by bringing all three together in a so-called “Triple Helix”. Since then, most economic advisory bodies have been built on the interactions of these three entities.
The main problem with this model is that society gets completely pushed aside. Here and there one hears murmurs of a “Quadruple Helix”: the idea that citizens should be central to these decisions.
Yet, idle thoughts and whispers rarely result in substantial change. Social actors belong at the table, and should be involved in policy and decision making processes.
Another problem with this model is that innovation does not necessarily originate in large companies and universities. Digital social innovation also comes from the broad, inventive ecosystem of creators, hackers, and social entrepreneurs. In the search for disruptive solutions, we need innovative strategies based on those outside the “Triple Helix”.
Recommendation 4: I’m smart, too.
Not a day goes by without some sort of Smart City initiative cropping up. The Smart City movement is convinced that technology is the answer to big city problems. But technology is not neutral, and must always be questioned.
Without technological literacy, we can only consume, and never produce. Only read, never write. If systems are smart, but we remain “stupid,” can we really say that we’ve progressed?
The major goal of our time is to become smarter and more tech-savvy. This is true not only for the youth, but also for those who currently hold the controls: the people responsible for making policies.
Smart City technologies introduce a huge dependence on suppliers, and IT departments within the public sector often struggle with the vendor lock-in that can accompany administrative systems. Only the suppliers can read and update their proprietary software.
So, who will hold the key to the smart city? Administrators, politicians, IT departments? Or the shareholders of companies? The companies that would just as soon sell their SmartCity software to North Korea as they would sell it to the Netherlands?
The big question is this: do administrators and politicians understand what the consequences of the “smartness” they are injecting into public infrastructures? Take the great promise of “smart lighting,” a showpiece for the energy saving, sustainability agenda. With smart lighting, the light only turns on when someone walks past a sensor. For some people, this provides a sense of security.
Others find it a sinister thought that someone with bad intentions could be waiting for them in the darkness. Depending on the context, light can mean the difference between life or death.
At the border between Mexico and the United States, for instance, simply walking with a flashlight can mean being shot at. Smart lighting might save energy, but it introduces a social dilemma: will we sacrifice safety for the sake of efficiency?
Let us ask ourselves these questions before we inject technology into the bloodstream of the city, and consider carefully the models and algorithms that will affect our reality.
Let’s make sure that those who are making decisions about the future of our cities have a real understanding of what technology means. Learn what code is, and the standards and values inherent to it. Only then can you make the right choices.
Data centers are formidable energy suckers, accounting for a large share of Internet energy consumption. Worse, they rely heavily on fossil power. Yet slashing their carbon footprint is far from inconceivable. Cloud computing technologies may help.
In this age of ultra-fast fiber, we still need to generate the electrons that keep the photons flowing at the speed of light across the globe. Some estimates suggest that around 10% of all electricity generated worldwide is now consumed within the digital domain – with, by one contested estimate, a new iPhone accounting for more energy use than the average refrigerator.
Datacenters account for around 3%, so their location is closely tied to the availability of cheap electrical power – natural or subsidized.
The energy generation landscape in Europe, however, is changing quickly. Fossil-fuel power plants in countries like Denmark and Germany are being phased out, and the effects are not limited to the countries that embrace the ‘Energiewende’ but ripple across borders, disrupting existing energy markets in ‘slower’ countries like the Netherlands and Belgium.
But although a new era is clearly looming, Newton’s first law – inertia – still applies. A coal power plant in the Netherlands that is pushed out of business by German wind power does not simply close down; it draws up new plans to attract the aluminium smelters of this age: datacenters.
And why not – isn’t that how we modernize the economy? On paper, the marriage between old-style power plants and datacenters looks ideal: coal and nuclear power plants need baseload clients, and here they are.
Compare that with the perceived unpredictability of solar and wind power, and it seems inevitable that (non-hydro-producing) countries that want to play a role in this digital age are stuck with fossil power generation for a long time to come…
But is that really so? The underlying assumption I want to challenge here is that datacenters must be treated like old-style factories that need a stable energy supply. That is not a necessity dictated by the technology.
Let us turn back briefly to the electrons and photons: the economics of transporting electricity over long distances may be terrible, but the economics of transporting data look much better.
Facebook operates a 27,000 m² datacenter near Luleå in northern Sweden – where there are more moose than people, several thousand kilometers away from the larger European city centres where the users are. But that kind of distance is not very relevant on the internet, since Berlin, Paris and London are just a few milliseconds away.
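The "few milliseconds" claim is easy to sanity-check: light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond. A back-of-the-envelope sketch (the distances are rough assumptions, and real routes add switching delay and detours):

```python
# Rough one-way fiber latency: light in glass covers ~200 km per millisecond.

SPEED_IN_FIBER_KM_PER_MS = 200.0

def one_way_latency_ms(distance_km: float) -> float:
    """Idealized propagation delay over a straight fiber run."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

# Approximate great-circle distances from Luleå (assumed for illustration):
for city, km in [("Berlin", 1600), ("London", 2200), ("Paris", 2300)]:
    print(f"{city}: ~{one_way_latency_ms(km):.0f} ms one way")
```

Even with routing overhead on top, the whole of Western Europe stays within a delay budget that no user of a news feed would ever notice.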
By the same reasoning, large solar-powered plants in North Africa could just as easily provide the European (and African, and Middle Eastern) internet with all the photons they need.
Or what about the North Sea on a windy winter night – groups of wind turbines with datacenters built directly into them? No power lines to shore, only a few fibers?
This would very quickly lead to an Internet powered mainly by renewables.
The only requirement would be that these datacenters share their data, so that in principle each can serve the same content; depending on where electricity is most abundant at any given moment, that datacenter takes over the main load.
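That "follow the electricity" idea can be sketched in a few lines. All site names and surplus figures below are invented for illustration; a real scheduler would also weigh latency, replication lag and grid prices:

```python
# Sketch of load steering between replicated datacenters: route the main
# load to wherever surplus renewable power is currently most abundant.

def pick_primary(surplus_mw: dict[str, float]) -> str:
    """Return the site with the largest renewable surplus (in MW)."""
    return max(surplus_mw, key=surplus_mw.get)

# Hypothetical surplus readings at one moment in time:
readings = {
    "lulea-hydro": 40.0,
    "north-sea-wind": 120.0,  # windy winter night
    "sahara-solar": 0.0,      # night-time: no sun
}

print(pick_primary(readings))
```

A few hours later, when the sun rises over the Sahara and the wind dies down, the same function simply points the load somewhere else – which only works because every site already holds the same data.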
Oh, and that principle of sharing data across multiple locations? It might sound high-tech, but it was actually invented a while ago. It is called ‘the Cloud’.