The soon-to-be-appointed EU expert group on fake news should seriously look into the overlooked danger that entrusting social networks with policing hate speech and fake news (for instance through voluntary codes) might actually give them a disproportionate power to shape public opinion.
For me personally, the most enjoyable moment in the whole “fake news” commotion has been the progressives’ re-discovery of the concept of truth. Finally the pudding of post-modernist relativism was available for eating. And it did not taste good.
However, fake news and related phenomena, such as echo chambers and social bots, are a matter of concern for the entire political spectrum. Politicians and the media feel challenged or even threatened by them. Some are even suggesting that in order to save democracy we need to regulate social media just like the printed press.
The issue boils down to the balance between the right of free speech and the danger of false information. There is a growing tendency to make the danger look bigger and the issue of freedom of speech smaller in order to achieve balance and thereby justify more governmental control of the social media at the expense of freedom of speech.
The advocates of tighter regulation of social media base their argument on several wrong and unproven assumptions.
The first wrong assumption concerns the gravity of the problem. It is simply not true that “The functioning of democracies is at stake. Fake news is as dangerous as hate speech and other illegal content.”
Fake news is not as dangerous as hate speech, and it is not illegal. The functioning of democracy is not at stake just because two elections made “wrong” decisions. Good arguments have been made that fake news did not have a serious impact on either the US elections or Brexit. And even if it did: politics has always played dirty. Information wars, lies, deception and false promises are fair game.
The second wrong assumption is that possession of the truth is possible. Most stories in the mainstream media are supposed to be fact-checked, and yet this does not prevent bias or falsehoods. What would a fact-check have made of a story claiming in 2003 that Iraq had no WMD? It would have been labelled fake news and suppressed.
The belief that “the lack of trusted bearings undermines the very structure of society” shows a deep contempt for, and distrust of, citizens, as if they were unable to form an opinion without an authority. In the past this authority was the Church, then the state, and in the future it will be the “fact-checkers”.
How wrong! Truth is not established by an authority. We approach truth through a confrontation of ideas and arguments. This should be preserved without limitations.
The third wrong assumption is that those in possession of the truth can be impartial. The war of ideas will simply move from debating the ideas on the Web to meddling with the “fact-checking” authorities. Who nominates them? Politicians? I am sure they would be happy to.
Or will they be “experts”? The “reporting” of hate speech is, as we speak, left to organized internet soldiers and bots. The fight is increasingly not about ideas but about how to get Twitter or Facebook to close, silence or demote accounts that spread the “wrong” arguments.
The fourth wrong assumption is the attitude towards free speech. Advocates of regulation of social media claim that “freedom of speech is not limitless. It is enjoyed only within some sort of framing, such as ‘enhancing the access to and the diversity and quality of the channels and the content of communication’.” This is wrong. Freedom of speech is limited by other freedoms, not by nice-to-haves like diversity and quality!
They say that “it would be rather naïve to guarantee totally unrestricted freedom of speech to those whose long-term aim is to destroy democracy and its freedoms altogether.” Then the whole idea of the freedom of speech is naïve. If it is not hate speech, if it is not a credible call to commit a crime, if it cannot be privately prosecuted as libel, it has to be free.
The real problem
In the effort to exaggerate the problem on the one hand, and to water down the issue of free speech on the other, we are missing a bigger issue: the danger that the authority to control thought and speech is outsourced to the industry. There is also an emerging danger that the “big-social” (Facebook, Twitter, Google, Snap …) will abuse its power to shape public opinion and form, in bed with big government, a controlled cyberspace environment.
To make the “big-social” fight fake news, they would have to be treated as newspapers. If they are newspapers, they can legitimately lean to one or the other political side, as most newspapers do. This would then allow Facebook or Twitter to actively promote certain political parties. If they are forced down that road, imagine how much worse the echo-chamber problem would get when the other side organizes its own social network. We would have, for example, the left on Twitter and the right on Gab!
I am convinced that it is important that the big-social offers a neutral and impartial platform for the exchange of ideas. If anything, this is something to regulate – in the direction of content neutrality, transparency of algorithms, and transparency of decisions about which accounts are to be disabled or otherwise punished for bad behavior. The internet promised to be an open space for the exchange of ideas. Let’s not ruin that! Let the big-social offer communication platforms and let’s not drag them into policing what people think!
All that legislators should demand is that the platforms remain available for the free and open exchange of ideas. Not a “voluntary code of conduct”, and not for big-social to “have their own guidelines to clarify users what constitutes illegal hate speech”.
What is illegal hate speech should be defined by law and enforced by courts. Censorship should not be outsourced to social media companies. If we go down that road we may end up with the alliance of the big-government and big-social to create a controlled and biased cyberspace that would dwarf the worst Orwellian nightmares.
Freedom of fake news
Freedom of speech includes the freedom of fake news. Existing laws on hate speech, libel and copyright infringement should be used against the authors, not against the big-social. Measures are needed to strengthen individual responsibility, not to ask the big-social to police the internet. A real-name policy should be promoted by labelling content that carries a real name and thus a responsible author. This is also a cure against the future threat of AI and bots interfering in places where humans socialize. Verified accounts are a good step in this direction.
The diseases of politics are fake politicians, fake policies, fake statistics and fake promises. Fake news is just a symptom. We should be treating the disease. And the best way to distinguish the bad and fake from the good and real is through a clash of ideas. The future of our civilization depends on preserving the internet as an open space for a free exchange of ideas. Any kind of ideas.
Picture Credits: ciocci
The challenges posed to our democracies by “fake news,” hate speech, and incitement to violence are matters of deep concern. But laws that undermine individuals’ due process rights and co-opt private companies into the censorship apparatus for the state are not the way to defend democratic societies.
Anticipating federal elections in September, Germany’s Minister of Justice last month proposed a new law aimed at limiting the spread of hate speech and “fake news” on social media sites.
But the proposal, called the “Social Network Enforcement Bill” or “NetzDG,” goes far beyond a mere encouragement for social media platforms to respond quickly to hoaxes and disinformation campaigns and would create massive incentives for companies to censor a broad range of speech.
The NetzDG’s scope is very broad: it would apply not only to social networking sites but to any other service that enables users to “exchange or share any kind of content with other users or make such content accessible to other users.”
That would mean that email providers such as Gmail and ProtonMail, web hosting companies such as Greenhost and 1&1, remote storage services such as Dropbox, and any other interactive website could fall within the bill’s reach.
Under the proposal, providers would be required to promptly remove “illegal” speech from their services or face fines of up to 50 million euros. NetzDG would require providers to respond to complaints about “Violating Content,” defined as material that violates one of 24 provisions of the German Criminal Code.
These provisions cover a wide range of topics and reveal prohibitions against speech in German law that may come as a surprise to the international community, including prohibitions against defamation of the President (Sec. 90), the state, and its symbols (Sec. 90a); defamation of religions (Sec. 166); distribution of pornographic performances (Sec. 184d); and dissemination of depictions of violence (Sec. 131).
NetzDG would put online service providers in the position of a judge, requiring that they accept notifications from users about allegedly “Violating Content” and render a decision about whether that content violates the German Criminal Code. Providers would be required to remove “obvious” violations of the Code within 24 hours and resolve all other notifications within 7 days.
Providers are also instructed to “delete or block any copies” of the “Violating Content,” which would require providers not only to remove content at a specified URL but to filter all content on their service.
The approach of this bill is fundamentally inconsistent with maintaining opportunities for freedom of expression and access to information online. Requiring providers to interpret the vagaries of 24 provisions of the German Criminal Code is a massive burden.
Determining whether a post violates a given law is a complex question that requires deep legal expertise and analysis of relevant context, something private companies are not equipped to do, particularly at mass scale. Adding similar requirements to apply the law of every country in which these companies operate (or risk potentially bankrupting fines) would be unsustainable.
The likely response from hosts of user-generated content would be to err on the side of caution and take down any flagged content that broaches controversial subjects such as religion, foreign policy, and opinions about world leaders. And individuals – inside and outside of Germany – would likely have minimal access to a meaningful remedy if a provider censors their lawful speech under NetzDG.
The proposal is also completely out of sync with international standards for promoting free expression online. It has long been recognized that limiting liability for intermediaries is a key component to support a robust online speech environment. As then-Special Rapporteur for Freedom of Expression, Frank La Rue, noted in his 2011 report:
“Holding intermediaries liable for the content disseminated or created by their users severely undermines the enjoyment of the right to freedom of opinion and expression, because it leads to self-protective and over-broad private censorship, often without transparency and the due process of the law.”
The Council of Europe has likewise cautioned against the consequences of shifting the burden to intermediaries to determine what speech is illegal, in conjunction with the report it commissioned in 2016 on comparative approaches to blocking, filtering, and takedown of content: “[T]he decision on what constitutes illegal content is often delegated to private entities, which in order to avoid being held liable for transmission of illegal content may exercise excessive control over information accessible on the Internet.”
Shielding intermediaries from liability for third-party content is the first of the Manila Principles on Intermediary Liability, a set of principles supported by more than 100 civil society organizations worldwide. The Manila Principles further caution that “Intermediaries must not be required to restrict content unless an order has been issued by an independent and impartial judicial authority that has determined that the material at issue is unlawful.” It is a mistake to force private companies to be judge, jury, and executioner for controversial speech.
CDT recommends that the German legislature reject this proposed measure. It clearly impinges on fundamental rights to free expression and due process. The challenges posed to our democracies by “fake news,” hate speech, and incitement to violence are matters of deep concern.
But laws that undermine individuals’ due process rights and co-opt private companies into the censorship apparatus for the state are not the way to defend democratic societies. Governments must work with industry and civil society to address these problems without undermining fundamental rights and the rule of law.
Picture credits: Medienfilter.de
Even though ‘traditional’ public service TV and radio remain very popular, we want to consolidate the important role public service media has to play in the digital environment, says Nicola Frank, Head of European Affairs of the EBU. Here’s her opinion on the main legislative proposals of the Digital Single Market strategy.
The Digital Post: What are the priority issues for EBU on the EU digital policy agenda?
Nicola Frank: Digital Single Market policies are crucial because they will impact the way programmes are licensed, distributed and presented to viewers. We want to make sure that our content reaches citizens on all devices, which calls for licensing tools which are fit for the digital environment as well as rules ensuring that all relevant networks carry our programmes, and significant platforms and interfaces display our services prominently to users. This is very important for cultural diversity and media pluralism.
The way citizens access TV and radio programmes has evolved extremely fast in recent years. Even though ‘traditional’ public service TV and radio remain very popular – reaching 59% and 44% of Europeans respectively every week – we want to consolidate the important role public service media has to play in the digital environment. This is very much at the heart of EBU members’ strategies today.
As part of its digital strategy, the EU has already taken a very important step towards effective net neutrality. Now we need to build on this stepping stone with the recent proposals on the audiovisual media services directive, the telecoms review and copyright.
TDP: What are the challenges for broadcasters in the recent telecoms review?
NF: From our perspective, the Telecoms review will impact the way our programmes are distributed on the various electronic communication networks – Digital Terrestrial Television, satellite, cable and IPTV. There is an opportunity within this review to strengthen the tools Member States have at their disposal to ensure that public service media programmes can be accessed on all key networks and on various devices. For example, ‘must-carry rules’ should be updated to match the fact that there are more means to distribute programmes and more on offer today, in particular interactive and on-demand services.
TDP: The copyright proposal has been criticised by many in Brussels and you are one of the few being quite positive, why is this?
NF: Yes, the proposal for a Regulation on broadcasters’ online content has caused quite an interesting reaction. Having analysed the proposal, we believe the Commission’s plans represent a balanced licensing solution. Effective licensing mechanisms are essential because assembling and distributing programmes implies that public service media organizations navigate through complex negotiations to obtain all the necessary licenses.
The proposal confirms contractual freedom and is in line with territoriality, principles which are at the very heart of the content-funding model. It should, however, be possible, for example, for Europeans who reside outside their homeland to access programmes from back home when they go online. When broadcasters wish to make a programme available across borders, there should be adequate licensing tools available to turn this wish into reality.
TDP: As part of the copyright discussions, broadcasters regularly mention the Satellite and Cable Directive. Where does that fit in?
NF: The Satellite and Cable Directive of 1993 is an interesting model because it has unlocked access to broadcasters’ programmes across borders on satellite and cable networks. It introduced effective licensing mechanisms for satellite transmissions and retransmissions on cable networks, which have shown that territoriality can co-exist with the Internal Market. For example, the Italian public channel RAI 1 is available in 20 EU Member States via cable with the exception of certain premium content, and those of us living here in Brussels can watch Sherlock on the BBC on Belgian cable without any problem. Around 1500 free-to-air satellite channels without encryption are available across Europe.
TDP: How are public broadcasters impacted by the proposals to update the AVMS Directive published earlier this year? From your point of view, how could the proposal be improved?
NF: The AVMS Directive covers subjects which are of major importance for public service media: informed citizenship, the protection of minors from harmful content and the promotion of European and domestic programmes to name but a few. They represent fundamental objectives for European audiovisual media policies. But what has changed is how these objectives are met in the digital environment.
The audiovisual media services Directive should be updated to ensure that valuable content for society is prominently displayed and easily accessed where citizens go to get audiovisual programmes in today’s digital environment. We want our contribution to society to be effective in this rapidly-evolving audiovisual landscape: we offer impartial and diverse information, a gateway to European content – over 80% of our EBU members’ airtime – and safe, informative spaces for users, especially minors.
Facilitating access to our members’ programmes is all the more important because the impact of powerful VOD and OTT providers on individual viewers’ choice and consumption is growing steadily. The Audiovisual media services Directive needs to give Member States the possibility to address access to, and appropriate prominence of, public service media programmes.
The role of video-sharing platforms and social media also needs to be examined. Obviously, you cannot regulate them like audiovisual media service providers who exercise editorial responsibility. But there needs to be a basic set of rules to protect minors and tackle hate speech because of the importance of these platforms in the digital environment, in particular for younger audiences.
Picture credits: Pierre Metivier
The proposed revision of the audiovisual media services directive (AVMSD) is expected to be opposed by online service providers and kindred spirits. Here’s why.
As part of the digital single market strategy (which is just over a year old now), the European Commission published six proposals on 25 May. A keenly awaited file among these is a revision of the audiovisual media services directive (AVMSD).
This is the legislation that governs national rules on all audiovisual media content. It is not just about television; it also covers online portals and on-demand services.
The AVMSD has taken various forms over the years, adapting to ongoing changes in the technological environment. Since the initial adoption of the Television without Frontiers Directive back in the 1980s, the idea has been to create a harmonised single market for audiovisual content whilst ensuring some key principles.
These include technological neutrality, freedom of reception and retransmission and flexibility for Member States to provide more detailed and stricter rules than specified in the AVMSD.
Market developments, notably the rise of the online world, made it necessary to revisit the rules and amend the framework. With the last revision, the Directive was renamed and extended to include not only the traditional television content but also non-linear services (such as “on-demand” and internet services) providing television-like audiovisual content. This would now include providers like Netflix.
The proposal adopted on 25 May by the Commission includes several controversial changes, such as rules on prominence, advertising time limits and the protection of minors.
The changes to the scope indicate that video-hosting portals, such as YouTube, will be included, as the proposal adds the following:
– a definition for ‘video-sharing platform services’ to the scope
(Article 1 a bis in the draft),
– the wording ‘videos of short duration’ to what constitutes a programme
(Article 1 b in the draft),
– a definition of a video-sharing platform provider as a media service provider
(Article 1 d bis in the draft),
– a provision specific to video-sharing platforms.
This is something that has previously not happened due to editorial responsibility not being part of the remit. However, this proposal does seem to be in line with comments from the Juncker Commission about tackling the barriers between online and offline providers.
The EU is aiming to create a single, pan-European market encompassing all digital services and thus it is unsurprising that the rules for online services are to be reinforced.
The Commission proposes a common quota at EU level, taking account of the fact that many Member States have already been implementing their own national quotas for European works. In Spain and Austria, for instance, there is an obligation to reserve 30% and 50% respectively of “on-demand” service catalogues for European works.
In the current AVMSD, a 10% share of the content broadcast must be European works. According to the leaked document, this has now changed so that linear (television) and non-linear service providers must ensure that 20% of their catalogues are European works. A report by the Commission from 2010 demonstrated a high share of European works in catalogues across Europe; Denmark, for instance, reported in 2009 that 88.9% of its on-demand catalogues consisted of European works.
In addition, the proposal contains a provision under which Member States will be able to impose financial contributions on “on-demand” services for local content – a sort of European content tax.
The providers will be required to contribute financially to the production of European works, including direct investment in content or contributions to national funds. What this means in practice remains to be seen.
However, it raises the question of whether this will be an alternative to offering a specific share of European works in catalogues. Will the documented approach of the Czech Republic and Italy become the ‘get out of jail free card’ for some providers?
The proposed Directive also allows Member States to oblige “on-demand” service providers that target audiences in their territories but are established in another Member State to make such financial contributions on the revenues made in the targeted Member State.
In this case, however, the provider would only be required to contribute if it was not subject to an equivalent contribution in the Member State in which it is established. For example, if Netflix maintains its headquarters in the Netherlands but is not obliged by the Dutch government to make a financial contribution to the production of European works, and at the same time targets a Belgian audience, Belgium could potentially seek a fiscal contribution from Netflix.
Netflix and other internet services captured in the scope, fear this proposal will damage their business model. Many platforms and portals pride themselves on having algorithms which tailor content according to the consumer’s taste. If a company has to financially invest in the production of European works and make these readily available on its platform, a personalized service will no longer work.
Additional requirements which may cause a stir include:
– stricter rules on protection of minors for television and on-demand services and specifically measures for on-demand services to put in place age-verification tools such as encryption and PIN codes,
– a possible daily limit on advertising between the hours of 7.00 and 23.00, with Member States recommended to develop co- and self-regulation codes with regard to the advertising of certain foods and drinks.
A clear focus for the Commission is the protection of vulnerable people. This can be seen in the provision in the draft which calls for stricter rules to ensure that the physical, mental and moral development of minors is not impaired.
In addition, the Commission has reinforced the current provision to protect minors from unsuitable marketing communications for food high in fat, salt/sodium and sugars, as well as alcoholic beverages.
This has in the past placed the onus on Member States to take measures, but with the continued emphasis on health and ensuring the safety of vulnerable groups, is the Commission setting up a framework to provide European rules?
Brussels should prepare to expect a stream of online service providers and kindred spirits to rally against this new proposal. Stormy audio-visual waves are ahead!
photo credits: Jonas Smith
Facebook newsroom should take a page from the accumulated experience in hundreds of years of press ethics, and a couple of decades of video games. Its first move should be to be transparent about its news algorithm and its priorities.
The tech community loves to make up laws to describe certain phenomena, such as Moore’s law, which predicts growth in computing power, and the perhaps more humorous Godwin’s law, which says that any sufficiently long online discussion will end with someone comparing someone else to Hitler.
But in order to understand the digital world, probably the most important of these laws would be Metcalfe’s law.
It says that the value of a network increases with the square of the number of members (or nodes), which by extension means that the price of staying outside the network increases with every new member.
This can be good news: for auctions or advert listings, it’s convenient to have everything in one place. The downside, of course, is that it spawns very powerful dominant niche players (cue Vestager vs Google).
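The arithmetic behind Metcalfe’s law is easy to sketch. The following toy Python model (the proportionality constant and the way the opt-out cost is measured are illustrative assumptions, not from the text) shows how network value grows quadratically while the value a single member forgoes by leaving keeps growing with every new member:

```python
# A minimal sketch of Metcalfe's law, assuming value = k * n^2.
# k is an arbitrary constant; real-world estimates of it differ widely.

def network_value(n: int, k: float = 1.0) -> float:
    """Value of a network with n members, per Metcalfe's law."""
    return k * n * n

def opt_out_cost(n: int, k: float = 1.0) -> float:
    """Value one member forgoes by leaving an n-member network:
    the difference between the network's value with and without them."""
    return network_value(n, k) - network_value(n - 1, k)  # = k * (2n - 1)

# The cost of staying outside grows with every new member:
for n in (10, 100, 1000):
    print(n, network_value(n), opt_out_cost(n))
```

Note that `opt_out_cost` grows linearly in n, which is why each new sign-up makes abstaining a little more expensive for everyone else.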
No business knows better how to game Metcalfe’s law than Facebook. With some 1.6 billion users, the point where anyone could realistically opt out was passed long ago.
Far from the naïve place of birthday greetings and flirty pokes it may once have been, Facebook today is more like an operating system for interaction far beyond social life: marketers use it to build hype and businesses to interact with customers, but dictators also use it to spread propaganda and terrorist organisations to distribute beheading videos.
It cannot be easy to be Mark Zuckerberg: one day your service is believed to bring democracy to the Middle East through its sheer existence, the next you have to travel to Germany to make apologies for Nazi hate speech.
If you’re a global service, you face the problem of different rules in different jurisdictions. So far, Silicon Valley has successfully played the “safe harbour” card, saying they can’t control what users post. (If all else fails, play the algorithm card – as in “we don’t really know what it does”!).
This is not really a way of saying “we take no responsibility” but rather a way to make their own rules. Convenient for businesses; the problem is that other people may disagree. And the deeper you get involved in a culture, the more difficult it gets to surf above the clouds.
These trends come together as Facebook’s power over the news becomes more evident.
Depending on what Facebook decides to show in its users’ feeds, it wields a lot of influence over the digital public sphere. The current debate about Facebook’s alleged anti-conservative bias hints at a much bigger issue.
When we ask “how can we know if the gravitation toward anti-Trump stories is a result of user preference or algorithm settings?”, we’re really asking questions such as: What rules and principles apply to Facebook’s news feed algorithm? Who is the editor in charge? Does Facebook subscribe to normal press ethics such as verifying stories with more than one source and hearing both sides of an issue?
These are basic things taught at every journalism school, developed over decades, even centuries, of free press. Systems of self-regulatory ethics bodies continuously evaluate and evolve these learnings, refining which publishing decisions are criticised and which are not.
The details of the formal systems may vary from country to country, but the principles are the same and around them is a living conversation in the professional journalist community about when to publish a story and when not to, balancing the interests of privacy (for example of crime victims) and the public’s right to information.
It is tempting to conclude that internet users should simply be better advised, not share hoax stories and be sceptical of sources, but that is the easy way out.
If journalists with years of education and the ethics of the professional community to draw from find these decisions difficult enough to deserve seminars, ethics committees, even specialist magazines and radio shows, how could we ever expect the average social media user to take such a responsibility?
The answer will always be that the organisation that delivers the news is responsible for the content. Mass distribution with no editorial responsibility is a recipe for disaster.
In 2012 in Gothenburg, Sweden, teenagers’ use of social media for sexual bullying and hate speech spiralled out of control and led to beatings and even street fights in what became known as the “Instagram riots”.
When The Pirate Bay posted autopsy photographs from a court case involving two children who had been murdered with a hammer, much to the horror of the Swedish public and not least the victims’ family, its spokesperson claimed the photographs were on public record and therefore could be distributed without limitation.
With normal press ethics, neither of these events would have happened. Editors would have stopped them.
When Wikileaks released diplomatic cables and military files, it exposed horrible abuse but also made public the names of local Western sympathisers, putting them at risk of vengeance from insurgents.
Edward Snowden learned from this and wisely released his leaks through established news outlets. The recent Panama papers leak is an even better example of responsible journalism, where hundreds of journalists worked together on the material before anything was made public.
But how can a service like Facebook use any of this?
It’s their users who post and share the material after all, not Facebook itself. The algorithm aside, Facebook could also learn from video games.
That’s right: many games offer discussion forums, user-generated content and in-game chat channels. Games companies try to keep a good atmosphere and avoid hate speech and sexism, but as a game becomes popular it quickly becomes impossible for the company to monitor all the content and understand all the languages.
The normal functions, such as reporting abuse and blocking users, are often not enough and can themselves be abused. Instead, many game companies give selected users moderator privileges, delegating editorial responsibility to trusted players. (In fact, this is the same model Google applies to its trouble-shooting forums, where users help other users.)
The beauty is that it can scale almost infinitely, even with billions of users. Facebook probably cannot simply copy that model, but it can adapt it for its newsroom service.
In traditional media, pluralism is perhaps the most important vaccine against media bias. With plenty of different publications available, there is always another view available. It is no coincidence the Soviet regime preferred to have only one news publication: Pravda (“The Truth” in Russian).
Under the mechanics of Metcalfe’s law, which makes a network more valuable the more users it already has, pluralism online becomes a challenge.
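Metcalfe’s law says a network’s value grows roughly with the square of its user count, since n users can form n(n-1)/2 pairwise connections. A minimal sketch makes the point (the user counts below are invented for illustration):

```python
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections among n users."""
    return n * (n - 1) // 2

# Hypothetical user counts: a network ten times larger offers
# roughly a hundred times the possible connections, which is why
# users gravitate towards the single biggest platform.
small, large = 1_000_000, 10_000_000
print(metcalfe_value(large) / metcalfe_value(small))  # roughly 100
```

This quadratic advantage is what makes genuine pluralism so hard to sustain online: a rival network with a tenth of the users offers only about a hundredth of the connective value.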
As Facebook benefits particularly from that phenomenon, it has an even greater responsibility to uphold pluralism on its platform. It could start by looking at what has worked for the press and for video games.
But its first move should be to be transparent about its news algorithm and its priorities. After all, Facebook asks for complete transparency from its users.
Picture credits: forzadagro
Until EU Competition Commissioner Margrethe Vestager got very serious about Google antitrust enforcement this past year, previously negligent antitrust enforcement authorities facilitated a dominant Google search and search advertising business.
Wake up world, you’ve been disintermediated.
Google now essentially stands between you and most everyone and everything on the Internet.
Google’s dominant search engine + its dominant Android operating system (OS) + its world-leading Chrome web browser + its uniquely-comprehensive, Internet utility functionality of 193 products, services and tools = a virtual Google “Inner-net” regime.
Google’s Inner-net has practically assimilated most all of what the public open-source World Wide Web does for Internet users and much, much more.
And it also has practically insinuated Google-controlled code into a virtual intermediary position between most everyone and most everything on the Internet.
Think of the World Wide Web increasingly as the public and open façade of the Web, and Google’s Inner-net as Google’s private and more closed regime of mostly-dominant, Google-controlled operating systems, platforms, apps, protocols, and APIs that collectively govern how most of the Web operates, largely out of public view.
This makes Google your de facto global governing gatekeeper technically for most all things Internet, if you are an Internet platform, network, business, competitor, advertiser, service provider, manufacturer, content provider, app developer, stakeholder, group, or user.
In other words, one now must go through Google-controlled code somehow to virtually access most all competitive Internet information, apps, devices, software, APIs, networks, the cloud, the Internet of things, etc. — going forward.
And the Google gatekeeper’s gazillion gates tend to remain open and free for only as long as it takes for Google to become dominant in the new targeted area, and then Google tends to close those supposed ‘open’ gates with default settings that favor Google. Google’s default dominance rule-of-thumb is that ~90% of users automatically acquiesce to new Google default settings.
What an epic failure of antitrust enforcement this is.
The FTC obviously facilitated Google’s dominance in approving its acquisitions of DoubleClick and AdMob, and in suspiciously shuttering its Google search bias and Android tying investigations shortly after the 2012 Presidential election.
Remember that the Clinton Administration’s DOJ blocked the Microsoft-Intuit acquisition in 1995, preventing Microsoft from extending its OS dominance of the tech sector into other sectors, and successfully sued Microsoft in 1998 for monopolization in bundling its Internet browser into its dominant Windows operating system.
Until EU Competition Commissioner Margrethe Vestager got very serious about Google antitrust enforcement this past year, previously negligent antitrust enforcement authorities facilitated a dominant Google search and search advertising business by approving challenged acquisitions that later proved anticompetitive. They also facilitated monopolization of the mobile operating system market by ignoring the anticompetitive bundling ramifications of Google’s dominant search business being contractually tied with Android, Chrome, YouTube, Maps, Play, Gmail, etc. via Android OEM contracts.
Commissioner Vestager’s expected decision in the coming weeks that Google search is indeed dominant at >90% EU market share, and that it has abused its search dominance in preferencing its own shopping service over competitive shopping services, likely will prove to be powerful precedents for the queue of EU and private antitrust complaints lined up to build upon them.
The biggest development everyone appears to be ignoring is the huge anticompetitive implications of Google’s plans to consolidate its Chrome browser-based operating system with its Android mobile operating system this year — per WSJ reporting.
Simply, folding Chrome’s browser-based OS into Android’s App-based mobile OS would create one global, heavily-bundled, Google Inner-net operating system that could auto-default to Google’s: Search, browser, ad-tech platform, Analytics, Translate service, Gmail, YouTube video distribution, Map location services, RCS multimedia messaging services, Apps, Play store, etc.
And Google’s Inner-net operating system default search and browser could then preference or auto-suggest any or all of Google’s 193 products and services including: 23 search tools, 10 advertising services, 33 communications and publishing tools, 15 development tools, 13 map-related products, 7 operating systems, 11 desktop applications, 46 mobile applications, 27 hardware products, and 8 general services.
This pending, sweeping opportunity for Google to self-deal far more broadly, preferencing Google platforms, apps, products, services and tools over competitors’ offerings, raises the stakes on whether Commissioner Vestager’s remedy in the abuse-of-search-dominance case will require Google to abide by a firm, enforceable and accountable non-discrimination-principle remedy, like the one the EU made Microsoft abide by for several years for anticompetitively favoring its Explorer browser over competing browsers.
Ironically, Google will have an especially hard time credibly opposing a future mandate of an EU non-discrimination-principle-remedy for tying its current market-leading Chrome browser to its dominant Android mobile operating system.
That’s because in 2009, Google’s current CEO, Sundar Pichai, helped publicly lobby the EU in a Google blog post stating: “Internet Explorer is tied to Microsoft’s dominant computer operating system, giving it an unfair advantage over other browsers. Compare this to the mobile market, where Microsoft cannot tie Internet Explorer to a dominant operating system, and its browser therefore has a much lower usage.”
In sum, as critically important as the EU’s final decision is on Google search dominance and its abuse of dominance in Google shopping for online businesses dependent on search, the EU’s Android OS tying investigation is as critically important to offline businesses that Alphabet is targeting with disintermediation going forward: ISPs; automakers; manufacturers of drones, robots, wearables, and home energy/automation devices; and biotech, health care, and finance-related companies, among others.
In a nutshell, Google search and search advertising are about dominating the virtual world of the Web and online content of all kinds. On the other hand, Google’s Android operating system and browser are also about dominating the real world of all IP-enabled devices, vehicles, drones, robots, networks, sensors, cameras, microphones, wearables, implants, and the Internet of things.
Any industry in the real world targeted by Alphabet for disintermediation should ask industries in the online world what it is like to try to compete against an unaccountable, dominant Google global-gatekeeper and biased broker that self-deals with impunity.
Forewarned is forearmed.
Picture Credits: Danny Oosterveer
It is not only about a “movement” of sexist nerds. The GamerGate controversy, which erupted one year ago, reveals a much wider “dark side” of life on the Internet.
Last autumn, GamerGate shocked the games industry. While it may have masqueraded as an online debate on press ethics, the actual effect was to silence female journalists and academics who publicly criticized sexist depictions of women in games.
Hundreds or thousands of anonymous web users made rape and death threats toward the handful of public women who were the targets and victims of GamerGate.
In some cases, GamerGaters allegedly also paid visits in real life. Media scholar Anita Sarkeesian cancelled a speech at Utah State University after an email threatened that a mass shooting would take place if she spoke.
Game developer Brianna Wu had to flee her home after her address was posted on Twitter (alongside rape and murder threats).
This is not an isolated event: anonymous haters online, or trolls, use social media to silence the voices of those they happen to disagree with, ironically often citing freedom of speech as a justification. Sexism is just one theme; racism may be even more popular.
GamerGate started as a hashtag on the online forum 4chan, famously connected to the Anonymous movement, a loose collective of anarchist internet activists, some of whom may also be involved in GamerGate.
However, even the moderators of the notoriously liberal 4Chan decided that GamerGate went too far and kicked them out. The GamerGaters regrouped at a similar but even more lax online space called 8Chan (or InfiniteChan) which has hardly any rules whatsoever.
There, the actions against the likes of Sarkeesian and Wu were orchestrated; the actual attacks, however, were carried out mainly via Twitter using the #GamerGate hashtag.
Anyone who says something like “sticks and stones may break my bones, but words can never hurt me” or “freedom of speech is absolute and can also be used to defend oneself against hate speech” has never been on the receiving end of something like Gamergate and has a very limited understanding of freedom of expression.
It is fair to express one’s own views, but not to try to abuse others into silence. I have met many who prefer to remain silent even on much less controversial topics, such as piracy or vaccines, for fear of threats or hate speech.
Anonymity has something to do with it, but lack of consequences is a more important factor. Some of the cyberbullying directed against, for example, Sarkeesian was not anonymous; instead, attackers bragged on forums about how they had hacked her Wikipedia page or posted porn images with her head pasted in.
The games industry was in shock. For many years, many parts of it had made great efforts to attract more women as players and employees, and to remove the age-old stigma of sexism.
The GamerGaters claimed they had the right to define who gets to play games and particularly have opinions about games. It went against every ambition of gender equality and all the progress made in the last decade. And the game world reacted.
Sweden’s top game developers wrote an op-ed saying “not in the name of our games”. Thousands signed petitions. The mainstream media covered the story with little patience for the haters who hid in anonymity.
Companies and organisations launched equality and diversity initiatives. Processor manufacturer Intel set aside 300 million US dollars for equal-opportunity initiatives. Some of these activities were already under way; some were a consequence of GamerGate.
But the most important actions may have been much humbler. Many game companies changed the rules on their forums, making consequences clearer and more strictly enforced by moderators.
In an online world without consequence, it is only too easy to post before thinking, more often than not exaggerating to impress other users.
The tone on many game forums may certainly have contributed to the Gamergate attitudes. But the game forums are also part of the solution.
Other social media could learn from how active moderation and clear rules can form a climate where respect and freedom of speech prevail over hate and bullying. The game world learned it the hard way.
photo credit: PJ Rey
In light of the sharp increase in video content and online entertainment, the audiovisual market has changed dramatically over the past years, posing complex new challenges for European policy makers and regulators. The traditional approach based on distinct markets seems inadequate to encompass this new competitive environment.
The market trend
The growth of internet-based video is mainly driven by two factors. The first one is the broadband deployment that allows video to be smoothly transmitted and widely distributed.
Video, which already represents more than 50% of traffic on fixed networks in Western Europe, is expected to increase exponentially on mobile networks, growing almost 20 times between 2011 and 2016, at an average annual rate of 80%.
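The two figures quoted above are mutually consistent: compounding 80% annual growth over the five years from 2011 to 2016 yields nearly a twentyfold increase. As a quick check:

```python
# Compound 80% annual growth over the five years 2011-2016.
annual_rate = 0.80
years = 2016 - 2011
multiple = (1 + annual_rate) ** years
print(round(multiple, 1))  # 18.9, i.e. "almost 20 times"
```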
The second factor is the increase in demand for quality services, which provides a strong incentive to produce HD and Ultra HD (4K) content, which is bandwidth-hungry.
Consequently, the market scenario exhibits the following features: viewers ask for more and more high-quality video content; manufacturers want to distribute such content on as many platforms as possible; and rights holders are finally conscious of the opportunities of broadband delivery, also for valuable content, provided they are properly rewarded.
Accordingly, new players enter the market and provide VOD services, implementing various business models: “advertising video on demand” (AVOD), which includes services such as Hulu or Dailymotion; “subscription video on demand” (SVOD), which includes services such as Netflix and Amazon Prime; “transactional video on demand” (TVOD), which has a “pay as you go” pricing scheme (iTunes); and the “freemium” VOD model, which gives all users a limited free tier, with paid offerings on higher tiers (Hulu Plus).
All this will require ever more capable networks, able to deliver such content on many platforms, and leaves open questions on traffic management and quality of service, which are a primary component of the current debate on net neutrality.
In the US, the top two video entertainment services account for 50% of bandwidth at peak time, and Netflix alone exceeds 30%.
For this reason, SVOD providers in the US are expected to benefit even more from this migration. Such services have indeed gained in popularity at the expense of pay TV, whose subscriber numbers have declined over the past two years. Consumers are abandoning traditional pay-TV subscriptions (“cord-cutting”), while SVOD services try to establish themselves as premium channels, with original and exclusive content.
In Europe, the industry shows a more complex competitive structure. The global market leader Netflix is trying to extend its dominance, having launched services in 13 countries in the last three years. At the end of 2014 Netflix reached 14 million subscribers, with higher penetration in the UK and northern Europe, the countries where it started first. Other international OTT services, such as Amazon Prime, are also popular in some of the main EU countries (the UK and Germany).
Telcos and broadcasters also invest in this market. Telcos, especially in countries where IPTV is more developed (France above all), have begun to extend their OTT VOD offerings. For example, SFR (FR) launched a VOD service for mobile and tablet, Deutsche Telekom (DE) the multiscreen service EntertainToGo, and KPN (NL) a multiscreen TV service on its own 4G network.
Broadcasters, especially after the economic crisis and the saturation of their core business, are now investing in this sector, and pay-TV operators in particular all have their own established VOD services, ready to compete with those of OTT players.
All this considered, according to ITMedia Consulting’s latest report, “Video on Demand in Europe: 2015-2018”, total revenues from VOD services in Western Europe are expected to reach €3.58 billion by the end of 2018, up from an estimated €2.14 billion at the end of 2015, an average annual growth of 22%.
Above all, revenues from the SVOD model are expected to grow the most: in 2018, SVOD will account for more than 50% of total VOD revenues, with an average annual growth of 34%, thanks also to Netflix’s expected expansion into the rest of Europe (e.g. Italy and Spain from October 2015) and the increasing competition from pay-TV broadcasters.
As shown above, the audiovisual market has thus changed dramatically over the past years, posing new complex challenges for policy makers and authorities.
First of all, the trade-off between more competition and more concentration has to be taken into account. If, on the one hand, the existence of network effects, economies of scale and sunk costs (programming, rights) raises barriers to entry against “native” internet operators, on the other hand this no longer seems sufficient to ensure competitive advantages for former analogue incumbents.
This trade-off refers in particular to the ability of established industries, such as telcos and media, to keep their acquired positions based on natural monopolies or oligopolies.
The question here is whether the evolution itself driven by the internet economy generates a greater level of competition, providing the maximum efficiency of the market in terms of consumer welfare, or if it is only a transfer of revenues and market power from the former incumbents to the succeeding ones.
All this suggests, however, that the traditional approach based on distinct markets is inadequate to encompass this new competitive environment.
As a consequence, the traditional antitrust market definition should be reviewed in the light of the great changes of the competitive framework, starting from the distinction between free TV and pay TV markets.
In this view, economic theory on two-sided markets may be useful to explain how platforms interact simultaneously with different groups of agents, exploiting the cross-group externalities and thereby correlating previously separated markets.
As for ex-ante interventions, the quest for balance between the highly regulated TV (and audiovisual) sector and the internet is hotly debated. A possible solution might be to adopt symmetric regulation between the two (a level playing field).
On the other hand this proposition is not easy to pursue. For example, must carry/ must offer obligations, pluralism requirements, quotas on audiovisual EU and national production and so on may be hard to extend to the new environment.
A “light touch” regulation might instead be more effective, so as not to lose the high levels of innovation that have fostered competition in the internet economy, thereby enhancing consumer welfare.
In addition, the geographic market for audiovisual services is typically defined at the national level. However, such national boundaries might soon be questioned by the global nature of Internet players.
In this view the market definition, which is at the moment different between regulation and competition regime (see figure 5), should hopefully be harmonized in order to be coherent and avoid conflicting decisions in the new sector.
In conclusion, in this changing environment, regulators and antitrust authorities are likely to play a substantial role. Their determinations, through ex-ante and ex-post interventions, will greatly influence the sector’s development and will significantly affect the speed of Europe’s transition towards convergence.
There is concern among those who produce professional content that their role is being marginalised in the debate around the Digital Single Market, which is too often over-simplified into a matter of balancing interests of “users” and “creators”.
The task of sorting the wheat from the chaff, of finding the new J K Rowling among the thousands of self-published wannabes – maybe needles and haystacks would be a better analogy – falls, in professional media production, to the producer or publisher.
For the purposes of this article I’ll use producer as a generic term encompassing editors and publishers, as well as those who, in the music, film and TV business, will have ‘producer’ as their job description.
At the moment, there is increasing concern among those who produce professional content that their role and their interests are being marginalised in the heated discussions in Brussels around the Digital Single Market, or reduced to derogatory comments about “intermediaries”.
This is not the fault of the European Commission. It is the Commission’s job to question whether legislation is needed to ensure that the success of the single market can be replicated in today’s online, connected world.
And if the purported GDP benefits of the DSM ever become tangible then clearly this will have been a great EU success story. But when discussion turns to the DSM’s impact on the content sector, this is too often, maybe as a function of today’s tweet-driven politics, over-simplified into a discussion about balancing the interests of “users” and “creators”.
These are important stakeholders but reduction of the digital content business to a two-sided debate between users and creators ignores the whole series of business processes which work the original creative idea into something the user will wish to pay money to watch, read or listen to.
There is a risk that the discussion may overlook processes like paying a large advance for a book which has just been pitched by the author and which might (but might not) cover its costs; or selecting and marketing those ideas which have the potential to succeed – it has long been a rule of thumb in the music business that the record company will lose money on nine out of ten releases; or getting the distribution strategy right.
Here, the trick is to use the internet as one platform among many so as to maximise distribution. For example, the political drama Borgen, produced by Danish public television but with financial contribution to its production and development from other broadcasters, is now a success in many European countries.
But this is thanks not just to the quality of the programme itself but also its marketing and distribution: by selling the national rights to broadcasters like the BBC or VRT in Belgium, the rightsholders knew that their show would be scheduled and promoted in such a way as to attract the interest of an audience which has high expectations of the “subtitled drama” slots on those channels.
An internet-driven distribution strategy – streaming the content, presumably with subtitles, and relying on word of mouth, social media or recommendation engines to bring an audience to the frankly rather arcane subject of Danish coalition politics – would not have worked.
The internet has had a profound and positive impact on the content businesses by easing distribution of both professional and amateur content. It is a great opportunity for the commercial content business.
Digitisation has also encouraged the production of content, sometimes blurring the line between amateur and professional. For instance, the group of citizen journalists – basically activists with cameraphones – from the Kiev Maidan are now running a respected online TV channel, Hromadske TV, and there are many examples of YouTube stars and bloggers establishing viable businesses and solid fanbases among their particular niche audiences.
But although the Internet is an exciting distribution platform, it does not finance content production, and for all the digital enthusiasm one hears in the Brussels bubble these days, the fabled “disruptive” effect of the internet has yet to reach the content sector in two respects.
First, to date, few online content ventures have broken through to, and changed, the mass market in the way that the “Tin Drum” or “Anarchy in the UK” did.
The shifts have happened in distribution rather than production of content. Secondly, and more importantly, those advocating disruption as an end in itself have yet to come up with a new model of ensuring that the people who make the content get paid.
The producer and publisher, like any other business, seek to recoup their investment and to maximise audience and viewership – which translate into revenues that flow back to the creator or creators.
The model works offline and is increasingly working online, although there are issues around revenue sharing between platforms and the creative sector. And it’s in Europe’s interest with the creative industries accounting for €509 bn of EU GDP and seven million jobs.
If the current system is to be challenged – and to date we have seen no evidence of a viable alternative – then the European content producers need to be at the heart of the discussion.
photo credit: Zé Pedro
The EU does not have the match of national media of its own and has very limited traditional tools of direct interaction with the citizens of the 28 member states. In the age of mass broadcast and print media that was a huge disadvantage. In the age of new media and digital democracy this might turn into an opportunity.
Around the turn of the century, when virtually everyone, at least in the developed world, was already “connected”, traditional media started rapidly losing clout and income as both readers and advertisers “migrated” online to a faster, more interactive and more diverse information environment.
Ever since, newspapers have been struggling, with many of them either becoming extinct or turning into newsprint extras to their websites.
TV stations have suffered heavily from the free availability of news and entertainment video online, and radio is increasingly confined to being the lazier alternative to listening to your own music while driving to work.
With a few years’ delay, the same decline in status befell traditional political parties in Europe – and for exactly the same reasons.
Traditional media were initially unable to react to the interactive requirements of their connected audiences just as much as traditional parties were unable to react to the expectations of citizens. People declined to be treated as voters and taxpayers any longer and wanted to be, well, decision-makers themselves.
The mass invasion of social media, which is yet to celebrate its tenth anniversary, put an end to the traditional European societies, where public opinion was the result of a complicated debate among citizens with different views moderated (many would say manipulated) by the political establishment and the mainstream media.
Social media made that debate impossible as people befriended, “liked” and “followed” only the like-minded and completely ignored everyone else, with painful political ramifications. The traditional political parties of Europe could no longer bet on leadership and consensus-building: the “new media” environment does not tolerate consensus and is virtually leaderless.
It’s very tempting to claim that digital media and social disintegration have helped populists and fringe radicals become the heroes of the day, but that would be missing the point by miles.
The ever-growing speed of connectivity online has only highlighted the decay of social connectivity “off-line” and the obsolete mechanisms of traditional political decision making and has led to the rapid growth of interactive political tools named with a variety of buzzwords like eParticipation, Digital Democracy, Crowdsource Legislation and the like.
Citizens have started shaping policies directly in cities across Europe, with France, Estonia, Finland and Iceland leading the way, from various forms of online consultation with citizens to actual co-legislating and interactive policy implementation. The UK’s legislature is the latest to discuss a strategy for making Parliament “fully interactive and digital” by 2020.
The case with the European Union is quite different. It’s pointless to argue how far the EU-wide equivalent of “public opinion” has developed; European Citizenship rests widely on the irresponsibly challenged right of free movement; and the European Parliament, the only EU body directly elected by EU citizens is so far failing in its efforts to increase voter interest and turnout.
In addition, the EU does not have the match of national media of its own and has very limited traditional tools of direct interaction with the citizens of the 28 member states.
In the age of mass broadcast and print media that was a huge disadvantage. In the age of new media and digital democracy this might turn into an opportunity.
The poor links between the EU institutions and the European citizens have been at the core of the debate about its democratic credentials. Now that the tools to boost connectivity are at hand, the EU doesn’t need national governments or media to spread its message and can communicate with the EU citizens directly.
The biggest challenge here is not to limit this communication to social media marketing, on-line consultations or participation in drafting policy.
Much like on the national level, citizens are not interested in debating unless they can feel the impact they’ve made on issues they care about.
Technology and communication will not solve any of the functional problems of the EU and Digital Democracy should not be seen as the “ultimate driving machine” of European success, but actively engaging citizens in decision making might well be the driver of much needed European reforms.
photo credits: Niooru
The whole debate about “suppressing borders” to online film viewing will only have any chance of success if it is combined with structural support for an evolution of the current value chain and of the European film industry’s sources of income.
February is an essential month in the movie industry calendar. For a few days, the Martin-Gropius-Bau, an elegant 19th-century building which survived Berlin’s historical dramas, becomes the most important film marketplace in the world.
At the European Film Market, which runs parallel to the Berlinale, hundreds of films from all over the world are sold to film distributors, also from all over the world. In the market corridors, or in the large bars of international hotels, dozens of agreements will turn film projects into a viable reality.
Europe is the main player here, on both the selling and the buying side, but not the only one. And what is sold here? Leaving aside co-production deals, this is essentially a market for distribution rights within a particular territory.
Film sales agents, authorised by the films’ rights holders, contact distributors and do what humans have been doing in markets for many centuries.
Films we have never heard of; films which are only known, if at all, in their country of origin, or which are already hits at the domestic box office; films which may not be fully finished, or which are little more than a script and a production plan; titles of all sorts of budgets and genres are sold to distribution companies on a national basis, for these companies to make them available to theatres, or to include them in an online catalogue, and so on: it would take too long to describe all the possible deals and formats these agreements can take.
What is important is that, as a result of those deals, as in any business, someone will be putting money at risk betting on the success of a movie; someone will start to recover part of an investment thanks to a good sale; someone will obtain the final amount allowing the film to become reality: “pre-sales” are in many cases a way of financing the film itself.
Once the market is over, distributors from small, midsize or large companies will return home with some titles in their bags and the rights for their theatrical and/or online distribution (and even other options nowadays) within a particular country.
Once back, they will spend time and money, in the form of advertisement targeted to the particular audience and in the language of the country where the film is to be released.
Many months or a couple of years later, leaving aside piracy, some of those movies will fall into total oblivion. But others that started their commercial life in Berlin may have won some awards here and there, or may have been very successful at the box-office.
Then, viewers’ demand for them will grow; people will look for those titles online… only to discover that the film is not available for viewing in that particular country.
Geo-blocking, that is the word. Online catalogues are territorial, even within the EU, and what is perhaps already available in one member state is blocked for you as soon as the platform’s software discovers that your IP belongs to the other side of the border.
What? Outrageous! Wasn’t the EU supposed to be a single market? Is that only true for the offline world? This is a truly anti-European practice! Well, wait a minute. This is not the result of an evil plan against consumers.
This is just the natural consequence of those deals which started on the ground floor of the Martin-Gropius-Bau, or at any other film market in Europe and abroad. It is simply the result of a complex business model which sustains the very existence of the film you want to watch.
If someone paid for the film rights in Belgium, that company naturally expects to recover its investment at the Belgian box office or through a Belgian web platform.
And that would be complicated if the Belgian audience could watch the film online through a distributor who purchased the online rights for Austria or Ireland. A movie could even be available online in Ireland before it has been released in theatres in Antwerp or Brussels.
The European Commission wants to change this state of things. Commissioner Oettinger travelled to Berlin on February 9 to proclaim again that message before an audience of 700 film professionals.
It was his first direct contact with the film industry in his political career:
“I want more choice for consumers. They should also benefit from the advantages of digitalization and be able to shop for more films across borders.”
This is the mantra constantly repeated by EU officials, even by Juncker himself. As they sometimes make it sound, their ideal world is a European digital single market where consumers can watch what they want, when they want, from any country. It sounds so nice. But who will be paying for that? To whom? How?
Too often those same officials forget to say that it is also the Commission’s responsibility to ensure that, in such an idealistic scenario, viewers can keep watching European content. That is also their obligation, both political and legal, according to the Treaties.
A similar consideration can be applied to many Members of the European Parliament (although MEPs are certainly free to have an anti-European political agenda or one that attacks European interests if they wish so).
That means that the whole debate about “suppressing borders” to online film viewing will only have any possibility of success if it is combined with structural support for an evolution of the current value chain and of the European film industry’s sources of income.
This is not about protecting old business models per se: everybody and everything must adapt to the online world and to new habits of consumption. The current “media chronology”, for example, which sets the mandatory timing of a movie’s consecutive release windows, from the theatrical release to laptop download or TV broadcast, must be reviewed.
It is definitely too rigid. Other issues can be reviewed as well, such as the situation of films which are simply not available at all in one country because demand there is too small, yet are fully available elsewhere in Europe.
Those and other aspects will need to change, and the industry knows that. But who has the capacity to buy the distribution rights of a film for the territories of 28 Member states at the same time? Who can manage and care about those theatrical releases of one title from Palermo to Gdansk, dubbed or subtitled in Polish, Italian and all the other languages?
Can that be done with one single uniform marketing campaign? And can it be done simultaneously? The answers to those questions easily lead to the names of a few non-European companies, and to the film titles those companies would be ready to invest in.
In other words: for many people it is Europe’s cultural diversity that is at risk here, if the current scheme of contracts, investments and payments which keeps the industry alive is simply killed off through the EU’s Official Journal before the European film industry has been transformed and alternative ways of monetising film production and distribution have been put in place.
Innovation can bring – it is bringing already – new opportunities to those who risked their money for a beautiful film to exist in the first place. Interestingly, almost at the very moment that Commissioner Oettinger was speaking on the first floor of the Ritz-Carlton hotel in Berlin, Netflix announced that it was opening its service in Cuba, and promised to include a large number of Cuban movies in its U.S. catalogue (and, when possible, in other countries).
This will not reach a wide audience in Cuba for now (according to the International Telecommunications Union the country had 5,360 fixed broadband subscribers in 2013 out of a population of about 11.3 million), but the symbol is there.
In approximately three years, an audience of tens of millions of viewers, in the US and abroad (and a few Cubans among them), will have access via Netflix to some of the best European films resulting from deals closed in Berlin in February 2015.
The power of connectivity is transforming existing economic, political, and social structures. In a word, the Internet is disrupting established systems. The resulting uncertainty is as much a risk as an opportunity. We are on the edge of a new frontier.
In December 2003, Sir Arthur C. Clarke noted that “satellite television, Internet, mobile phones, email – all these are technological responses to a deep-rooted human desire to communicate and access information. Having achieved unprecedented progress in the field of communications during the past half century, we now have to pause to think of social, cultural and intellectual implications of what we have created.”
As economies and societies are increasingly becoming data-driven, interactions between individuals, the groups they belong to and the institutions that govern them are evolving dramatically.
The transformational power of connectivity is immense and resonates far beyond technology itself, as the Internet is changing the existing dynamics of economic, social and political constructs.
Conceptions of accountability, transparency, privacy, and even democracy are being reconsidered. Political systems, the fabric of social contracts and the nation state are being challenged by an ever-growing desire to know, share and control.
As a result, the enabling power of the Internet is blurring physical borders between countries and peoples, between governments and citizens, between businesses and consumers; what used to create wealth, welfare, influence and power is no longer certain. And this uncertainty is as much a risk as an opportunity.
These evolutions are fuelled by new technologies that disrupt established systems. In this regard, the Internet is no different from the printing press, the telephone, the light bulb, the locomotive or the airship. These inventions not only served the technical purpose their creators intended; they also revolutionized systems altogether.
The telephone, for instance, provided a new technical way of communicating, but it also generated new rules of etiquette – a new societal way of behaving and communicating. The light bulb transformed factories, cities and homes; it changed the way people live, work and interact. Hence the power of technology lies not only in its mechanics, but also in its capacity to transform the environment it is used in.
The digital revolution is transforming the way information is generated, collected and shared – whether between individuals, between businesses, between machines, or between citizens and governments. Information is participation. And participation leads to contribution.
In December 2013, Jason Pontin, Editor of the MIT Technology Review, argued that these new technologies “don’t solve humanity’s big problems.” Indeed, they don’t. But perhaps their purpose is less in solving problems for now, than in creating opportunities for collective contribution.
Ultimately, throughout the world, the big question is one of control: control of information, control of its flow, and protection of that control. As Sir Arthur cautioned, “it is vital to remember that information – in the sense of raw data – is not knowledge; that knowledge is not wisdom; and that wisdom is not foresight. But information is the first essential step to all of these.” In essence, the Information Age is much more than a big problem to be solved; it is humanity’s New Frontier.