Over a half century ago, TV measurement was invented. Advertisers wanted to know whether their TV ads were effective. How to define “effective”? The ultimate answer: Did the ads drive consumers to action, to buy the product or service being advertised? This kind of detailed information was simply not available, so the industry settled for a weak proxy: Were my ads even seen?
A sampling system was set up to monitor whether a TV program was watched by a small number of panelists who had an “opportunity to see” the ads. And these panelists had to be actively engaged, which raised all kinds of biases.
Marketers still had no idea if their marketing objectives were being met, let alone if those panelists even bought the product being advertised. But alas, a system was born and still exists today, supporting our $140 billion global TV advertising industry.
But the industry has not given up pursuing those ultimate questions. Hence, 40 years ago the quest for single-source data began. Single source is the idea of measuring data from the same homes over time to see which ads those homes receive and how the ads change their buying behavior. Single-source data can also determine the best media for reaching various purchasers, such as the type of purchasers who are more likely to buy a brand after being exposed to its current ads -- we call that the ROI-Driving Segment.
The vision of single source turned out to be elusive, as huge companies and leaders in our industry kept entering the single-source business and then exiting it again fairly rapidly, most often after spending up to hundreds of millions of dollars on failed attempts (e.g., ScanAmerica, Project Apollo).
Single-source data was expensive: panelists had to be persuaded to do a lot of work recording their purchasing and viewing habits, and special equipment, supplemental to whatever may have already existed, had to be installed in their homes. As a result, despite high expenditures, the resulting sample sizes tended to be so small that the findings were statistically insignificant for most brands and for most TV networks in today’s highly fragmented viewing world. This approach continued for 20 years, on the assumption that if enough money were thrown at it, the vision of single source could be forcibly engineered.
Later, other companies had a different idea: Why not use naturally occurring data (NOD) -- hard data that was already being regularly collected, without burdening consumers? Why not match that data across households? Why not build an easy-to-use computing system to access this data and find the answers? .NET software had evolved to the point that this was possible. Storage costs had come down dramatically. And most importantly, these big digital data sets had become available.
In television, NOD is the “hard” data that can come out of the servers and cable/satellite set-top boxes. The box is there anyway. The box can datestamp and timestamp channel changes and other events and send them upstream to the server. Some cable and satellite operators are already doing this.
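As a rough sketch of what such a set-top box event stream might look like (the field names and values here are illustrative assumptions, not any operator's actual schema), each record carries an anonymized box identifier, an event type, and a date/time stamp; viewing durations then fall out of consecutive timestamps:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TuneEvent:
    """One event as a set-top box might log it (illustrative schema)."""
    box_id: str          # anonymized device identifier, not PII
    event_type: str      # e.g. "power_on", "channel_change"
    channel: str         # channel tuned to after the event
    timestamp: datetime  # date- and time-stamp recorded by the box

# The box logs events locally, then sends a batch upstream to the server.
events = [
    TuneEvent("box-0421", "power_on", "ESPN",
              datetime(2013, 5, 1, 19, 0, tzinfo=timezone.utc)),
    TuneEvent("box-0421", "channel_change", "CNN",
              datetime(2013, 5, 1, 19, 32, tzinfo=timezone.utc)),
]

# Time spent on the first channel is the gap between consecutive events.
duration = events[1].timestamp - events[0].timestamp
print(duration)  # 0:32:00
```

The key point is that no new hardware or panelist effort is involved: the box already produces these records as a by-product of normal operation.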
In purchase data, NOD is the point-of-sale “hard” data automatically collected for financial tracking, where the records are household-specific. Examples include supermarket frequent-shopper cards, which we now see expanding to many other types of stores, including department stores, specialty stores, fast food, coffee shops, and so on. For cars and trucks, there are registration data. For pharma, there are prescription-fulfillment records.
This approach avoids the additional equipment previously required and the need to recruit, retain, and coordinate households as panelists. The idea revolutionized single source by exploiting NOD and allowing buyers and sellers to manipulate that NOD to create actionable business metrics.
The solution gathers Big Data sample sizes, solving a huge problem with prior industry attempts, because gathering such data does not require installing any new equipment in people’s homes. It does not require giving households a special credit card, barcode scanner, or peoplemeter to use, nor does it require recruiting panelists. So long as the most rigorous privacy protection is adopted, such as Washington’s “privacy by design” objective (e.g., receiving ISO 27001 certification via continual audits and never accepting names, addresses, or other personally identifiable information), NOD can be matched and massive, passive single-source data created.
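One common way to match records across data sets without ever accepting names or addresses is to have each data owner replace its household identifiers with one-way hashes before anything is shared, so matching happens only on the hashed keys. The sketch below illustrates that idea under stated assumptions: the salt, identifiers, and data values are hypothetical, and real deployments typically involve a neutral third-party match partner and stronger key management:

```python
import hashlib

def household_key(raw_id: str, salt: str = "shared-secret-salt") -> str:
    """One-way hash so raw household identifiers never leave each owner's system."""
    return hashlib.sha256((salt + raw_id).encode()).hexdigest()

# Each data owner hashes its own identifiers before sharing (toy data).
viewing = {
    household_key("household-A"): ["saw brand X ad"],
    household_key("household-B"): [],
}
purchases = {
    household_key("household-A"): ["bought brand X"],
}

# Matching is done on hashed keys only; no PII crosses the boundary.
single_source = {k: (viewing[k], purchases.get(k, [])) for k in viewing}

print(single_source[household_key("household-A")])
# (['saw brand X ad'], ['bought brand X'])
```

Because the hash is one-way, neither party can recover the other's raw identifiers, yet exposure and purchase records for the same household still line up.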
This is now a reality. Dozens of advertisers are currently using an advanced business intelligence platform to mine hard NOD to find their right audience and measure the effectiveness of their advertising. Dozens of TV networks are also using that same platform to demonstrate to the advertisers that their networks are the right place to find their audience. After 40 years, a transparent single-source approach is finally capable of cost-effectively utilizing massively large sample sizes, yielding statistically significant findings for all brands.
In retrospect, the idea may seem obvious -- but the top research companies in the world were apparently blindered by the old-fashioned, panel-based approach of equipment installation and panel recruitment and did not see the NOD idea.
Today the idea of naturally occurring data is spreading far and wide. The idea promises to revolutionize marketing and media research. Now that you have read the meme “naturally occurring data,” you will probably start to have fresh ideas about how you can use naturally occurring data to drive your business.