DataBeat 2014: Analytics 3.0 and the Data Bottleneck
The following post explores just one of the many talks at DataBeat 2014. To find out more about DataBeat 2014, check out our recap here.
DataBeat 2014 kicked off with a keynote from author Tom Davenport, whose latest book, Big Data @ Work, has been receiving high praise since its publication. He spoke on the data bottleneck and the emergence of an Analytics 3.0 world. We've come a long way from the "Analytics 1.0" world of the 1990s and earlier, which dealt with small, structured data: easily accessible attributes like customer profiles, billing trends, and various marketing stats. The data scientists of that era (who went by other names) focused mostly on descriptive analytics and reporting rather than anything predictive.
We may have left that focus behind, but many companies are still immersed in the Analytics 2.0 era, Davenport says, an outlook that takes as its subject "big data": messy, qualitative factors like customer experience and customer satisfaction. These data are largely unstructured, and they often represent a large time investment for data scientists, who struggle to wrestle them into clean datasets.
The Analytics 3.0 world, however, focuses not on big or small data but on the more aptly titled "all" data. Whether the data at hand are structured or unstructured, Analytics 3.0 effectively combines the problems and opportunities of Analytics 1.0 and 2.0, presenting data scientists with big and small data from a wide variety of sources. In this new world, analytics is used to create data products and to support industrialized decision processes. Predictive analytics is also king here.
The problem? The more sources and types of data you have, the more time you'll spend cleaning your datasets. Already, an estimated 80 percent of a data scientist's time goes to preparing the data at hand: munging, extracting, filtering, and cleaning. Davenport calls this data "plumbing," the less glamorous side of the sexiest job of the 21st century.
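To give a concrete sense of what that plumbing involves, here is a minimal, hypothetical sketch in Python using pandas. It is nothing Davenport showed; the dataset and column names are invented for illustration. It walks through the routine steps the 80 percent figure refers to: normalizing column names, coercing types, handling bad values, and dropping duplicates.

```python
import pandas as pd

# Hypothetical, deliberately messy records standing in for real customer data.
raw = pd.DataFrame({
    " Customer ID ": ["001", "002", "002", "003", None],
    "Signup Date": ["2014-01-05", "2014-01-07", "2014-01-07", "2014-02-30", "2014-03-02"],
    "Monthly Spend": ["$120.50", "95", "95", None, "180"],
})

# Routine plumbing: tidy the column names before anything else.
raw.columns = raw.columns.str.strip().str.lower().str.replace(" ", "_")

cleaned = (
    raw.dropna(subset=["customer_id"])           # drop records with no customer ID
       .drop_duplicates(subset=["customer_id"])  # keep one row per customer
       .assign(
           # Invalid dates (like 2014-02-30) become NaT instead of crashing the pipeline.
           signup_date=lambda d: pd.to_datetime(d["signup_date"], errors="coerce"),
           # Strip currency symbols and coerce to numbers; unparseable values become NaN.
           monthly_spend=lambda d: pd.to_numeric(
               d["monthly_spend"].str.replace("$", "", regex=False), errors="coerce"
           ),
       )
)

print(cleaned)
```

Multiply these few steps across dozens of sources and formats, and the bottleneck Davenport describes becomes easy to picture.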
One thing is apparent: We have to get a handle on the time required to clean datasets before the Internet of Things comes fully online. Connected devices already produce data of every type and format from many different sources, and if we wait until the entire world is connected, data science as a field will drown in the deluge. Data scientists, after all, need to be "on the bridge" of an organization, said Davenport. With their skills, they should be involved in developing products and services for customers, not doing backroom decision making or spending their time cleaning data.
So how can you leave Analytics 2.0 behind and join the Analytics 3.0 world? Davenport believes companies must do two things: build a basic capability in data management and product/service innovation, and then adopt a new approach to data integration and curation. Then, and only then, will today's data scientists overcome the data bottleneck and enter the era of Analytics 3.0.