• Data Economy

    Ever heard about Hadoop? You should

by Vasia Kalavri | 04/May/2015 | 6 min read

    Although the word Hadoop may not ring a bell to you, it is probably one of the main reasons why you have heard – and continuously hear – buzzwords such as “Big Data” and “Internet of Things”.

    On April 15-16, The Digital Post had the pleasure to attend the Hadoop Summit 2015 Europe, which took place in Brussels, at the SQUARE meeting center. Here are our impressions from the event.

     

    What is Hadoop?

    Hadoop is an open-source platform for distributed storage and processing of very large amounts of data.

    You might never have heard of it, but Hadoop is probably one of the main reasons why you have heard – and continuously hear – buzzwords such as “Big Data” and “Internet of Things”. In fact, Hadoop has become the de facto standard for big data processing.

    Data volumes are increasing. Ninety percent of the world’s data was created over the last two years, according to research from IBM. Essentially, Hadoop accomplishes two tasks: massive data storage and faster processing.

    On the one hand, it makes it possible to store files bigger than what fits on any single node or server. On the other hand, it provides a framework for processing those huge amounts of data.
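    To make the “framework for processing” part concrete, here is a minimal sketch of the classic word-count job expressed for Hadoop Streaming, which lets the map and reduce steps be written as plain scripts that read standard input and write standard output. The file names (mapper.py, reducer.py) and the data layout are illustrative assumptions, not anything from the summit.

```python
#!/usr/bin/env python3
# mapper.py -- emit "<word>\t1" for every word read from standard input.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word.lower()}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- sum the counts per word; Hadoop delivers the mapper output
# sorted by key, so identical words arrive on consecutive lines.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

    A job like this would be submitted with the hadoop-streaming jar, pointing -mapper and -reducer at the two scripts and -input/-output at HDFS paths; Hadoop then takes care of splitting the input across the cluster, scheduling the tasks and shuffling the intermediate key-value pairs, which is exactly the storage-plus-processing division of labour described above.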

    Even though Hadoop is nearly 10 years old, it has only recently started becoming popular in industry and it is still quite far from being mainstream. However, interest in Hadoop-related technologies keeps increasing, demand for Hadoop talent is rising, and skilled people in this area are highly valued and hard to find.

     

    The conference

    The Hadoop Summit, organized by Hortonworks, is one of the biggest industry conferences focusing on Apache Hadoop and related big data technologies. Hadoop community members, users, programmers, industry partners and researchers participate in this 2-day event to share their experiences and knowledge, seek Hadoop talent to hire, promote their products and do a lot of networking.

    The summit takes place both in North America and Europe. In fact, Hortonworks reported having about 15-20 percent of its business and employees in Europe.

    The event kicked off on Monday April 13 with several pre-conference events, such as trainings and affiliated meetups, followed by the main 2-day conference on April 15-16.

    Each main conference day started with quite lengthy keynotes, followed by talks in 6 parallel sessions, covering the following topics:

    – Committer track: technical presentations by committers of Apache Hadoop and related projects

    – Data science and Hadoop

    – Hadoop Governance, Security & Operations

    – Hadoop Access Engines

    – Applications of Hadoop and the Data Driven Business

    – The Future of Apache Hadoop

     

    We analysed the talk titles (abstracts) to find the most popular topics and not surprisingly, these mostly included the words Hadoop, Data, Apache and Analytics:

    [Figure: HadoopConTags – tag cloud of the most frequent words in the talk titles]
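    For the curious, a frequency count like the one behind this tag cloud takes only a few lines; the file name and the stop-word list below are assumptions made for illustration, not part of the original analysis.

```python
# Count the most frequent words in the session titles (illustrative sketch).
from collections import Counter
import re

STOP_WORDS = {"the", "and", "of", "for", "in", "a", "with", "to", "on"}

# Hypothetical input: one talk title per line.
with open("talk_titles.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z]+", f.read().lower())

counts = Counter(w for w in words if w not in STOP_WORDS)
print(counts.most_common(10))  # words like 'hadoop', 'data', 'apache', 'analytics' dominate
```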

    According to the organizers, there were 351 submissions from 163 organizations. Yet the diversity and variety of the agenda was rather limited. Almost 1 out of 5 speakers came from Hortonworks (the organizers), and there were only 5 women among the speakers, an extremely disappointing share of around 5%. The attendee count, 1,300 as reported by the organizers, was also impressive, but equally striking was the lack of women among them.

     

    [Figure: HadoopConGraph]

    What got people talking, however, was the immense amount of data reported by some of the participating companies: among others, Yahoo reported 600 petabytes of data, a 43,000-server cluster and 1 million Hadoop jobs per day; Pinterest talked about 40 petabytes of data on Amazon S3 and a 2,000-node Hadoop cluster; while Spotify reported 13 petabytes of data stored in Hadoop.

    As anticipated, the keynotes were very much focused on proving the business value of Hadoop and on promoting products and services. Apart from some awkward role playing and the sales pitches, there was little technical value in most of them, with the Yahoo keynote being a notable exception.

    Streaming and real-time processing were certainly two of the hottest topics at the summit, covered by several talks each day. Real-time processing, i.e. processing data the moment it reaches your system and being able to immediately make decisions based on occurring events, is, without question, the next – if not current – big thing. I hope, though, that speakers get a bit more creative with their use-cases: out of the 5 streaming talks I attended, 4 used “anomaly detection” as their walk-through example and motivation.
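    Since four of the five streaming talks leaned on anomaly detection as their walk-through, it is worth noting how little that canonical use-case requires in essence: keep a window of recent values and flag anything that strays too far from the window’s mean. The sketch below is framework-agnostic and purely illustrative; in the talks the same logic would typically run inside a streaming engine rather than a standalone class.

```python
# Minimal streaming anomaly detector: flag values far from the recent mean.
from collections import deque
import math

class AnomalyDetector:
    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)  # recent observations
        self.threshold = threshold          # how many std-devs counts as anomalous

    def observe(self, value):
        """Return True if `value` is anomalous w.r.t. the current window."""
        if len(self.window) >= 10:          # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) > self.threshold * std:
                self.window.append(value)
                return True
        self.window.append(value)
        return False

# Usage on a toy stream of latencies (milliseconds): only the spike is flagged.
detector = AnomalyDetector()
for latency in [12, 11, 13, 12, 14, 11, 12, 13, 12, 11, 95, 12]:
    if detector.observe(latency):
        print(f"anomaly: {latency} ms")
```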

    The party venue and theme perfectly fit the male-dominated audience: it was held in an automotive museum, the famous Brussels Autoworld, certainly a spectacular place if you like cars… and motorbikes… and even more cars. Cars and motorbikes aside, we really enjoyed the food, even though vegetarian and vegan attendees probably felt neglected.

    Hadoop Summit was a successful event, judging by its numbers, technical content and overall organization. We look forward to seeing how the organizers will try to build on this success by improving, during the next events, speaker and attendee diversity and inclusivity.

     

    photo credits: Alex
  • Data Economy

    Cognitive Computing: Benefits and Challenges of the Next Tech Revolution

    by BarcelonaTech UPC | 13/Apr/2015 | 7 min read

    Cognitive Computing is going to transform and improve our lives. But it also presents challenges that we need to be conscious of in order to make the best use of this technology.

    Co-authored by: Mateo Valero and Jordi Torres

    Big Data technology allows companies to gain the edge over their business competitors and, in many ways, to increase customer benefits. For customers, the influences of big data are far reaching, but the technology is often so subtle that consumers have no idea that big data is actually helping make their lives easier.

    For instance, in the online shopping arena, Amazon’s recommendation engine uses big data and its database of around 250 million customers to suggest products by looking at previous purchases, what other people looking at similar things have purchased, and other variables.

    Amazon is also developing a new technology which predicts what items you might want, based on the factors mentioned above, and sends them to your nearest delivery hub, meaning faster deliveries for us.

    To do so, they use predictive models: a collection of mathematical and programming techniques that analyse historic and current data to determine the probability of future events and predict future outcomes.

    Today, predictive models form the basis of many of the things that we do online: search engines, computer translation, voice recognition systems, etc. Thanks to the advent of Big Data these models can be improved, or “trained”, by exposing them to large data sets that were previously unavailable.
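    As a deliberately tiny illustration of what “training” a predictive model on historical data means, the sketch below fits a logistic regression that estimates the probability of a future event from past observations. It uses scikit-learn with invented feature names and toy data, so it is a sketch of the idea, not a description of any production system mentioned here.

```python
# Toy predictive model: estimate the probability of a purchase from past behaviour.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical data: [previous_purchases, minutes_on_product_page]; label: bought (1) or not (0).
X_history = np.array([[0, 1], [1, 3], [2, 10], [5, 8], [0, 0], [3, 12], [1, 1], [4, 15]])
y_history = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X_history, y_history)  # "training" = fitting the model to historical outcomes

# Predict for a new customer who bought twice before and browsed for 9 minutes.
prob_buy = model.predict_proba([[2, 9]])[0, 1]
print(f"estimated purchase probability: {prob_buy:.2f}")
```

    The larger the historical dataset such a model is exposed to, the better it can separate the patterns that precede an event from noise, which is exactly the sense in which Big Data “trains” these systems.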

    And it is for this reason that we are now at a turning point in the history of computing. Throughout its short history, computing has undergone a number of profound changes with different computing waves.

    In its first wave, computing made numbers computable.

    The second wave has made text and rich media computable and digitally accessible. Nowadays, we are experiencing the next wave, which will also make context computable, with systems that embed predictive capabilities, providing the right functionality and content at the right time, for the right application, by continuously learning about their users and predicting what they will need.

    For example, such systems can identify and extract context features such as time, location, task, history or profile in order to present an information set that is appropriate for a person at a specific time and place.

    The general idea is that instead of instructing a computer what to do, we are going to simply throw data at the problem and tell the computer to figure it out itself.

    We have changed the nature of the problem from one in which we tried to explain to the computer how to drive, to one in which we say, “Here is a lot of data, figure out how to drive yourself”. For this purpose, the software borrows functions from the brain, such as inference, prediction, correlation and abstraction, giving systems the possibility to do this by themselves. Hence the use of the word “cognitive” to describe this new kind of computing.

    These reasoning capabilities, data complexity and time-to-value expectations are driving the need for a new class of supercomputer systems, such as those investigated in our research group in Barcelona.

    This requires the continuous development of supercomputing systems that enable the convergence of advanced analytics algorithms and big data technologies, driving new insights from the massive amounts of available data. We will use the term “Cognitive Computing” (others use Smart Computing, Intelligent Computing, etc.) to label this new type of computing research.

    We can find different examples of the strides made by cognitive computing in industry. The accuracy of Google’s voice recognition technology, for instance, improved from 84 percent in 2012 to 98 percent less than two years later. DeepFace technology from Facebook can now recognize faces with 97 percent accuracy.

    IBM was able to double the precision of Watson’s answers in the few years leading up to its famous victory in the quiz show Jeopardy!. This is a very active field.

    From 2011 through May 2014, over 100 companies in the area merged or were acquired. During the same period, over $2 billion in venture capital was invested in companies building cognitive computing products and services.

    Cognitive Computing will improve our lives. Healthcare organizations are using predictive modeling to assist in diagnosing patients and identifying risks associated with care, and farmers are using predictive modeling to manage and protect crops from planting through harvest.

    But there are problems that we need to be conscious of. The first is the idea that we may be controlled by algorithms that are likely to predict what we are about to do.

    Privacy was the central challenge of the second wave era. In the next wave of Cognitive Computing, the challenge will be safeguarding free will. After the Snowden revelations, we realize how easy it is to abuse access to data.

    There is another problem. Cognitive Computing is going to challenge white collar, professional knowledge work in the 21st century in the same way that factory automation and the assembly line challenged blue collar labor in the 20th century.

    For instance, one of Narrative Science’s co-founders estimates that 90 percent of news could be algorithmically generated by the mid-2020s, much of it without human intervention. And researchers at Oxford have published a study estimating that 47 percent of total US employment is “at risk” due to the automation of cognitive tasks.

    Cognitive Computing is going to transform how we live, how we work and how we think, and that’s why Cognitive Computing will be a big deal. Cognitive computing is a powerful tool, but a tool nevertheless – and the humans wielding the tool must decide how to best use it.

     

    photo credits: Robert Course-Baker
  • Digital Single Market

    Digital Single Market: A paradigm shift

    by Michal Boni | 23/Feb/2015 | 7 min read

    European efforts to create a Digital Single Market mean much more than the usual work on finalisation of legislation. A comprehensive approach is needed to understand and unleash the benefits of a truly connected continent.

    The key question today is to understand the scale and the real impact of the digital revolution. The internet is a general purpose technology, as few inventions in history have been, and, like those of Gutenberg or Bell, it has completely changed the world.

    We need to use a holistic approach to understand the different aspects of the forthcoming digital challenges, so that we can build tools that will allow us to fully exploit and benefit from new technologies.

    The creation of the European single market has had a crucial impact on many areas. The process of building the single market has itself already provided valuable lessons for the European economy, notably the resulting common set of rules that simplified the legal framework for businesses and consumers alike.

    “The idea of the digital single market works as a multiplier”

    The digital single market adds value through the digital drivers that are present in all sectors of the economy.

    Today, we are not only seeing a massive development of the ICT sector – we are also experiencing the spread of “ICT development” into all sectors and all branches of the economy. The benefit and value this spread brings is a result of the growing number of digital factors found in the economy today.

    Those digital factors create new opportunities.

    First, there is the contribution of a new phenomenon: data processing that fuels the data-driven economy. This changes the way we manufacture, increases productivity, provides new ways of allocating resources, improves energy and transport efficiency, supports the development of smart cities and lays the basis for the future internet of things.

    The result is that we are confronted with a completely new economic reality. It extends from the “connected car” sector delivering various types of content – via the neutral platforms model (defined as a collection of goods and services provided in a fair way) – to the “app economy”, enabling everyday decisions of users while influencing their environment and the economic landscape.

    This new reality is leading us to a new model of products and services: consumer-oriented, aiming to meet our expectations and our needs. So the phenomenon of personalisation emerges. Everything can be personalised, tailor-made to our individual taste. But very often the trade-off is disclosing knowledge of our habits and our needs, and eventually giving up privacy.

    Therefore we are faced with the conflicting realities this new economic landscape presents: on the one hand, disclosures that contribute to the profiling of individual behaviour as a means to monetise privacy; on the other, our will to defend “the digital I”.

    This is the reason why we need to have data and privacy protection regulation in place: trust is fundamental for the new digital economy.

    Only by ensuring more trust can we build the right framework for the growth of new kinds of public services: m-health in healthcare; new forms of teaching and the growing influence of MOOCs in education; access to open data and public knowledge; and, as far as cultural activities are concerned, access to more opportunities thanks to the new European copyright rules, related to authors and to clearer definitions of the public domain.

    Hence it is obvious that Big Data development, the personalisation of products and services, data protection and digital security all rely on internet access and a framework for global connectivity.

    Connectivity is the other great opportunity and challenge, as it needs to serve the services mentioned above and an ever-growing number of connected devices. The need for investment in infrastructure tailored to these future requirements of a digital, fully connected economy is evident.

    This translates into considering systemic, structural incentives for European operators to complete their work on 4G networks while beginning the preparation for next generation 5G infrastructure.

    The internet is changing all aspects of our life: from the economic to the social and private. It redefines our position as consumers, as producers, as workers and as citizens.

    “The internet represents today a new and important opportunity to rethink our democracy”

    A new concept of citizenship is taking shape, opening new possibilities for participatory democracy, by enabling online consultations, rendering the decision making processes more transparent and making the citizens a valuable source of knowledge for the public authorities.

    This new model of shared democracy is a good foundation for the shared economy. As we move towards its implementation, by further developing the internet, its governance and its infrastructure, we should not forget the dark aspects it also brings.

    As the virtual aspects, values and principles of the internet cross over into the real world, we should highlight how they interconnect with the clear rule of law, respect for fundamental rights and transparent tools for law enforcement.

    The digital era is challenging us. The new is the added value of the paradigm shift.

    We have a unique opportunity to move from open, creative minds to open societies and via the open governments to an open, much more collaborative economy with new competitive advantages.

    But it requires the awareness that European efforts to create the Digital Single Market now mean significantly more than the usual work on the finalisation of legislation. “More” means a holistic approach that will allow this digital package to provide comprehensive results and benefits.

    The way for the paradigm shift is: from a new technology to a new socio-economic model of development.

     

    photo credits: Dan Foy
  • Data Economy

    Making Europe fit for the ‘Big Data’ economy

    by John Higgins | 12/Jan/2015 | 5 min read

    The European Commission has taken an important first step in outlining possible elements of an EU action plan on Big Data. It is now essential to get the policy framework right. The faster the better.

    A second wave of digital transformation is coming.

    The first one revolutionized the way we order information and spans technological advances from the advent of the mainframe computer to the arrival of Internet search.

    “This second wave will reinvent how we make things and solve problems.”

    Broadly it can be summed up in two words: Big Data. The expression ‘Big Data’ is used to describe the ability to collect very large quantities of data from a growing number of devices connected through the Internet.

    Thanks to vast storage capacity and easy access to supercomputing power – both often provided in the cloud – and rapid progress in analytical capabilities, massive datasets can be stored, combined and analysed. In the next five years Big Data will help make breakthroughs in medical research in the fight against terminal illnesses. Per capita energy consumption will decline sharply thanks to smart metering, another application of Big Data.

    Traffic jams will be rarer, managing extreme weather conditions will become more science, less guesswork. Makers of consumer goods of all kinds will be able to reduce waste by tailoring production to actual demand. This new ‘data economy’ will be fertile ground that will allow many new European SMEs to flourish.

    Broad adoption of such Big Data applications can only happen if the data is allowed to flow freely, and if it can be gathered, shared and analysed by trusted sources. Size definitely does matter. The bigger the dataset, the more insights we can glean from it, so it’s important that the data can flow as widely as possible.

    “Some elements of Big Data might involve personal data.”

    People need to be confident these are protected by laws and agreements (such as safe harbour). All actors in the data economy must work hard to ensure that data is as secure as possible against theft and fraud.

    The European Commission has taken an important first step in outlining possible elements of an EU action plan for advancing towards the data-driven economy and addressing Europe’s future societal challenges.

    To complement this initiative DIGITALEUROPE has drafted a paper outlining what we see as the policy focus in relation to Big Data.

    We have identified eight priorities:

    – Adopt a harmonised, risk-based and modern EU framework for personal data protection that creates trust while at the same time enabling societally beneficial innovations in the data economy

    – Encourage the protection of Big Data applications from cyber attacks, focusing regulatory efforts on truly critical infrastructures

    – Support the development of global, voluntary, market-driven and technology-neutral standards to ensure interoperability of datasets

    – Clarify the application of EU copyright rules so as to facilitate text and data mining

    – Boost the deployment of Open Data by transposing the Public Sector Information Directive into national law by June 2015 at the latest (EU Member States)

    – Create trust in cross-border data flows by supporting the implementation of the Trusted Cloud Europe recommendations

    – Continue addressing the data skills gap by supporting initiatives like the Grand Coalition for Digital Jobs

    – Continue encouraging private investment in broadband infrastructure and HPC technologies with public funding

    DIGITALEUROPE is ready to engage constructively with the European Commission, Parliament and Council to help them formulate a European action plan for the data economy.

    It is essential to get this policy framework right, but it is also important to move fast. While Europe is preparing the ground for widespread adoption of the new digital age, the rest of the world is not standing still.

     

    photo credit: data.path Ryoji.Ikeda
  • Data Economy

    IoT: The great opportunity for Telcos

    by Ajit Jaokar | 15/Dec/2014 | 4 min read

    By 2020, we are expected to have 50 billion connected devices. Will European Telecoms firms monetize the explosive growth of Internet of Things? The next five years will be critical. In the long term, much may depend on the development of 5G technology.

    Like many memes which originate in the web domain (for example Web 2.0), Big Data has an impact on the Telecoms industry. However, unlike Web 2.0 (which is mostly based on the advertising business model), Big Data has wider implications for many domains (for example healthcare, transportation, etc.).

    The term Big Data is now (2014) quite mature. But its impact is yet to be felt across many verticals over the next few years. While Telecoms is also a vertical, it is also an enabler of value for many industries. Hence, there are many areas where Telecoms will interplay with Big Data.

    Based on my teaching at Oxford University and on the City Sciences program at UPM (Universidad Politécnica de Madrid, the Technical University of Madrid), I propose that the value of Big Data for Telecoms lies in IoT (the Internet of Things).

    IoT is huge, but how huge?

    “By 2020, we are expected to have 50 billion connected devices”

    To put this in context: the first commercial citywide cellular network was launched in Japan by NTT in 1979. The milestone of 1 billion mobile phone connections was reached in 2002, 2 billion in 2005, 3 billion in 2007 and 4 billion in February 2009.

    So, 50 billion by 2020 is a massive number, and no one doubts that number any more. But IoT is really all about Data and that makes it very interesting for the Telcos. Data is important, but increasingly it is also freely available.

    Customers are willing to share data. Cities are adopting Open Data initiatives. Big Data itself is based on the increasing availability of Data. IoT is expected to add a huge amount of data too.

    But, who will benefit from it and how?

    There is a phrase variously attributed to the oil magnate J. Paul Getty: ‘The meek shall inherit the earth, but not its mining rights’. In other words, data will be free, available and open, but someone will make money out of it. No doubt, the web players and various start-ups will all monetize this data. But how will Telecoms operators?

    “Looking at the business case for Big Data and IoT, the next five years are critical for Telecoms.”

    Here’s why. IoT connectivity will come in two forms: local area connectivity and wide area connectivity. Bluetooth 4.0 and iBeacon will provide the local area connectivity, and we can expect that from 2015 onwards most devices sold at retail will support Bluetooth 4.0.

    But wide area connectivity will still need 5G deployment, which is the most logical candidate for wide area IoT connectivity. And therein lies the value and business case of Big Data for Telecoms: 5G will be needed to connect the ‘IoT islands’ over the coming years.

    Will Telecoms monetize IoT?

    Time will tell, specifically over the next five years, since most analysts predict that 5G deployments will take place in 2020 and beyond.
