Like everything else in the digital metrics space, the conference has evolved from a focus on traditional audience measurement to a focus on analytics, Big Data and data science. Throughout the conference I was keenly aware of the culture clash between the media researcher and the data scientist. And I wondered if there really needs to be a clash.
So I’ve been thinking a lot about the deployment of Big Data assets in the digital space. Clearly, it is one of the most profound developments in digital metrics — and indeed, in our lives. The Internet of Things is already here; we can pay with our watches, and we’ve got Google thermostats.
But in our space, I worry that there is too much emphasis placed on “Big Data,” and not enough on “Good Data.” Perhaps here the data scientist can learn from the media researcher.
Back in the ’80s, when I was a young media researcher at Arbitron, we knew all about data. And we generally turned our noses up at it. On the path to informed decision-making, data was a bottom-funnel asset. Our mandate was to turn data into information, and information into knowledge. The notion of data in and of itself as some sort of end game would have struck us as preposterous.
Data has become ubiquitous; we’ve all heard the statistic that 90% of the world’s data has been created in the last two years. Nowadays you can’t swing a dead cat without generating data (especially if the cat is wearing a Fitbit). We have so much data at our fingertips now that I wonder if maybe we think we are a lot smarter than we actually are. We mistake data for information. And if the data is big enough, we often mistake it for knowledge.
Suppose we had a pool of tens of millions of cookies with demographic data appended. With a footprint that large, surely we would be able to do targeting, modeling, and attribution in a profoundly efficient manner, no? Indeed the digital business is now built on the back of such data sets.
But what if it turns out that this particular data source gets gender right 50% of the time? (That’s the same rate of success a chimpanzee would have flipping a coin.) Do we still think that “Big Data asset” will improve our business decision-making? What if I told you that my company just came across such a situation?
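A quick simulation makes the point concrete. The numbers below are invented for illustration (they are not our actual data), but they show why a binary attribute that is right only 50% of the time carries no information at all: targeting on it performs exactly like buying the audience at random.

```python
import random

random.seed(42)
N = 100_000

# True gender for a simulated pool of cookies (binary, 50/50 split).
truth = [random.choice(["M", "F"]) for _ in range(N)]

# A "Big Data" label that is correct only 50% of the time: statistically
# indistinguishable from ignoring the truth and flipping a coin.
label = [t if random.random() < 0.5 else ("F" if t == "M" else "M") for t in truth]

# Suppose a campaign targets cookies labeled "F". What share of the
# cookies it reaches are actually female?
reached = [t for t, l in zip(truth, label) if l == "F"]
precision = sum(1 for t in reached if t == "F") / len(reached)
print(f"Share of 'F'-targeted cookies that are female: {precision:.1%}")
# Hovers around 50% -- no lift over random selection.
```

However large the footprint grows, precision stays pinned at the base rate; scale cannot rescue a label with no signal in it.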
In my email this morning was a link to a brief interview with recently retired Turner Chief Research Officer Jack Wakshlag, on the topic of research versus analytics. It’s worth a listen. At 4:10, Jack says, “There are things that traditional media research people do, that those who are involved in data analytics generally don’t do… media researchers, for example, spend a lot of time examining the quality of their data. The people in data analytics are expert mathematicians and statisticians, but may not have the history of trying to figure out, or study, the quality of the data…. Those two traditions, if they come together, can be very powerful.”
Jack is right. There is a synthesis to be forged between the disciplines of media research and data science that will serve us all.
For example, set-top-box data is an invaluable asset in building contemporary TV and cross-platform measurement solutions—but such data tells you nothing about who is in front of the set. The data scientist would say it can be modeled; the media researcher would argue for a high-quality, projectable, metered panel to understand how people are using the medium. And it turns out they’re both right: model the demography, but do it with a quality panel as the training set.
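The hybrid approach can be sketched in a few lines. This is a toy illustration, not any vendor’s actual method: the panel households (where the meter tells us who is watching) supply labeled training examples, and a simple classifier then projects demographics onto set-top-box households, where the box reports tuning but not people. The feature names and values here are invented.

```python
def centroid(rows):
    """Mean of each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest(x, centroids):
    """Label whose centroid is closest to x (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, centroids[lab])))

# Panel households: viewing features with KNOWN demographics from the meter.
# Hypothetical features: [share of viewing in daytime, share that is sports].
panel = [
    ([0.7, 0.1], "F35+"),
    ([0.6, 0.2], "F35+"),
    ([0.1, 0.8], "M18-34"),
    ([0.2, 0.7], "M18-34"),
]

# "Train": compute one centroid per demographic group.
by_label = {}
for feats, lab in panel:
    by_label.setdefault(lab, []).append(feats)
centroids = {lab: centroid(rows) for lab, rows in by_label.items()}

# "Apply": classify an STB household from its tuning behavior alone.
stb_household = [0.15, 0.75]  # mostly sports viewing
print(nearest(stb_household, centroids))  # -> M18-34
```

The model here is deliberately crude (a nearest-centroid classifier); the point is the architecture: the big, shallow data set supplies reach, and the small, deep panel supplies the ground truth that makes the modeling trustworthy.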
We need to bring the same pragmatism to Big Data solutions that we have always brought to audience measurement solutions. We can turn data into information, information into insight, and insight into better business decision-making. But that is something we, the media researchers and the data scientists, must do together.