The key players involved in the study were the CRE, the Ball State University Center for Media Design, and Sequent Partners. The study was conducted in 2008, and its findings released to the industry in early 2009.
While the CRE was designed as an independently operated group (set up largely so Nielsen could avoid government intervention in how it measures media audiences), Nielsen funded the research at a cost of about $3.5 million. Such a major expenditure on a single research study is simply beyond the reach of any individual agency or network, which is one of the key benefits of the CRE.
One of the lesser-known findings of the study was that while Nielsen’s broad television usage data for households and demos such as Adults 18-49 were remarkably accurate, as the audience segments got narrower, the gap between reported Nielsen data and observed behavior got wider.
At the time, I suggested an analysis that I thought could provide definitive insights into the accuracy of Nielsen’s reported ratings. Nielsen, of course, did not want to have anything to do with this, and we moved on to other things (as usually happens when Nielsen wants to move on to other things).
I bring this up because at the time, it seemed as though the CRE was the one industry entity capable of doing real, worthwhile independent research, designed solely to advance how audiences are measured, without any sales-related agenda or bias. In retrospect, perhaps that was a naïve notion (even though we original CRE members had pledged to do just that).
In the past, Nielsen has made some attempts to “validate” its currency audience data through telephone coincidentals and the like. My proposal was a little different. I wanted Nielsen to meter the homes of research executives from the top 20 or 30 media agencies, all the broadcast networks, and major cable networks (anywhere from 50-100 people).
For one day, these execs would record their media behavior in minute detail: what they were watching, which platforms they were using, whether they were time-shifting, when they fast-forwarded through commercials, when they switched channels, which commercials they saw, and so on. Participating research executives could also set up their own scenarios that they thought might present measurement difficulties, then simply compare their actual viewing to what Nielsen reported.
Unlike regular viewers, senior researchers are accustomed to doing such detailed work and should have no problem accurately recording their activity. This could be done not just for Nielsen but also for comScore and any other company that claims it can measure video audiences.
Not only would this provide, for the first time, a look at how accurate reported ratings are, but it would also tell us exactly where improvements to audience measurement need to be made (which I believe was the original purpose of the CRE).
This analysis should be overseen by an unbiased third party with no stake in the results, perhaps a small group of former industry researchers who are no longer working for a buyer, seller, programmer, measurement service, or producer of video content.
As the industry seems to be hurtling (stumbling?) toward “total” audience measurement and TV Everywhere, we should pause for just a moment to see whether we are measuring TV anywhere correctly.