Thanks to CIMM (Coalition for Innovative Media Measurement), a division of ARF (Advertising Research Foundation), the industry now has an incredibly comprehensive and detailed report on the value and projectable costs of panel-based audience measurement.
Entitled "Who’s Counting? The Future Role and Value of Panels in US TV Measurement," the report could just as easily have been titled "Everything You Needed to Know About Panels in TV Measurement but Were Afraid to Ask!"
At 72 pages, this report is not light reading, even for the media research cognoscenti.
Co-authored by Data ImpacX CEO Joan FitzGerald and Anonymous Media Research CEO Jonathan Steuer, the report not surprisingly found that “Panels look set to remain a critical ingredient to measurement for the foreseeable future, enabling solutions for use cases that cannot be addressed by big data alone.
“Although datasets from set-top boxes, smart TVs, and other sources have tremendous utility for measurement, panels remain vital for calibrating, enriching, and commingling these datasets and enabling measurement of actual viewers rather than households. Panels will continue to play a vital role in measuring viewing on unconnected TV platforms and of over-the-air channels.”
The elephant in the room is addressed via the report’s emphasis on the value of counting people rather than devices, as well as the importance of eyes/ears-on measurement capability.
The paper also underlines the value of real people-based panels for demographic adjustment and for cross-platform training within the device/surface measurement used in the modeling approaches of Project HALO (World Federation of Advertisers [WFA]), Project Origin (the U.K.’s Incorporated Society of British Advertisers [ISBA]), and Project Aquila (the U.S.’s Association of National Advertisers [ANA]).
These current industry projects are focused on “getting the counting right.” They are based on measuring content rendered on devices/surfaces, because that is machine-countable, fast, and relatively low cost in a digital environment. They set aside the fundamental value of panels and of the various techniques for measuring people’s actual viewing that are needed to significantly enhance Big Data sources and/or “faux individuals” built from virtual identities (VIDs).
In the TV/video cross-platform and cross-media measurement arenas, have we conveniently sidelined The Attention Council’s mantra, “No attention, no outcomes”?
Have we also ignored the principle that using AI to enrich Big Data sets and VIDs is only as good as the training models -- i.e., exquisitely representative people panels?
Measures of devices/surfaces -- a.k.a. “content rendered counts,” or the Media Rating Council’s so-called “viewable impressions” -- involve no empirical, persons-level eyes/ears-on measurement. Consequently, any inference of an opportunity-to-see (OTS) from “content rendered counts” is a hypothetical OTS at best, as distinct from what have been called “Real OTS” or “viewing” measures, and is therefore moot.
While theoretical claims of OTS can be made for smartphones and desktops based on device/surface measures, they cannot be made for the in-home, traditional multi-set, multi-person TV environment, where real viewers (Real OTS) must be differentiated within each viewing household, nor for other non-digital media.
In this report, two use cases -- “personification” and “attention and engagement” -- emphasize using panels to get beyond “content rendered”: to understand, at a minimum, OTS and, in the case of “attention,” eyes/ears-on as well. Another section specifically calls out OTS concerns.
If persons-based eyes/ears-on TV viewing is not measured -- whether via a panel or other empirical means -- are solely device/surface-based content measures an acceptable planning or buying surrogate? Absolutely not.
This report makes it clear that both “personification” and eyes-on/attention are important and that inclusion of sophisticated panel-based measurement as part of any integrated approach is likely the only way to address those use cases despite the costs.
So, will the WFA’s “Project HALO,” ISBA’s “Project Origin,” and the ANA’s “Project Aquila” take hold? Yes, because they are essentially funded and managed by Google and Meta to serve their interests vs. TV (linear and streaming) and other media.
The basic metric will be at the content-rendered-counts level and therefore will report only anecdotal OTS, at best.
Will the industry fully embrace this comprehensive CIMM TV measurement report on the value and dimensions of panels despite the potential cost of doing it right? Doubtful.
Will it embrace people panels measuring eyes/ears-on based TV/video viewing metrics in the ad-attention economy so media agencies can buy what they plan and plan what they buy? The elephant holds the answer.
Good report, Tony. And one that is encouraging to see. A crucial point, which may not be included in the report, is that a panel is the only way at present to determine who watched any particular bit of content---be it program or commercial. The people meter button-pushing system does not do this and produces inflated viewer-per-minute estimates. We cover all of this in some detail in our soon-to-be-offered subscription report---at a reasonable price---on TV attentiveness. Stay tuned.
Yes Tony, a good report. And promising.
I agree with Ed that it is encouraging. I also agree with Ed that PROPERLY MANAGED panels are currently the best way to measure viewing and duration. I'm not 100% in agreement that People Meters "produce inflated viewer-per-minute estimates."
Yes, the meter button could remain active after the person has left the room and falsely inflate viewing estimates. However, in the average US home of around 2.45 people, a significant proportion of homes will have teens, etc., who can't be bothered to press the meter button, which conversely also MISSES genuine viewing. My guess would be a variance from reality of 0.1 to 0.2 'viewers'.
John, the basic issue with the people meters is that they report that 90-95% of the commercials that appear on their panel's TV screens are "watched" by people who claimed that they are program viewers. This simply can't be so. The people meter is a 50-year-old solution to the problems that diaries were having in the late 1960s and 1970s, as more homes acquired second and third sets--which meant that the diary keeper didn't necessarily know who was "watching"--and as cooperation rates for diary studies fell. It was never intended to provide a second-by-second estimate of viewing---only set usage.
Even if younger panel members fail to press their buttons as often as desired to indicate that they are "watching" a show when the channel is first selected---such "audiences" are the least likely to remain in the room during commercials or to pay attention to them, by a wide margin---per TVision. And they are, no doubt, the least inclined to follow the system's instructions---despite the prompts---to report any cessation of "viewing" while the show is on-screen.
The solution is obvious. Drop the button-pushing part of the design and use something like TVision's observational methodology to measure exactly what is happening, second by second, for viewing as well as set usage.
Yes Ed, I acknowledge your response.
My comments were based on AU TV research. For example, many years ago the MFA wanted to gauge the 'drop-off' in ad breaks. I was supplied a week's data for Sydney (the biggest panel in AU) and had to do the calculations (on a minute-by-minute basis) for all broadcast channels. From memory (as I lost the report in a bushfire some years ago), the drop-off in the demos ranged from around 4% to 8% in the breaks. If Little Johnny didn't like the lounge-room TV, he slunk off to the TV in his bedroom. Not perfect, but better. A minimal sketch of that kind of calculation appears below.
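For readers curious about the mechanics, here is a minimal sketch of a minute-by-minute ad-break drop-off calculation of the sort described above. The data layout, channel name, and audience figures are all invented for illustration; the MFA study's actual format is not described here.

```python
# Minimal sketch of an ad-break drop-off calculation from minute-level
# panel data. All field names, values, and the data structure are
# hypothetical, invented for illustration only.
from collections import defaultdict

# Each record: (channel, minute_index, audience, in_break)
# audience = panel-projected viewers for that minute;
# in_break = True if the minute falls inside an ad break.
minutes = [
    ("CH7", 0, 1000, False),
    ("CH7", 1,  990, False),
    ("CH7", 2,  930, True),   # ad break starts
    ("CH7", 3,  925, True),
    ("CH7", 4,  985, False),  # program resumes
]

program_avg = defaultdict(list)
break_avg = defaultdict(list)
for channel, _, audience, in_break in minutes:
    (break_avg if in_break else program_avg)[channel].append(audience)

for channel in program_avg:
    prog = sum(program_avg[channel]) / len(program_avg[channel])
    brk = sum(break_avg[channel]) / len(break_avg[channel])
    drop_off = (prog - brk) / prog * 100
    print(f"{channel}: {drop_off:.1f}% drop-off in breaks")
    # With these toy numbers: program avg ~991.7, break avg 927.5,
    # drop-off ~6.5% -- within the 4% to 8% range John describes.
```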
OzTAM also progressed by installing VPNs in a significant number of measured homes (it was not forced on panellists), which allowed an element of de-duplication and could pick up streaming as well as broadcast, so dual registration was corrected (the old source was removed even if Little Johnny didn't press the button on the TV meter). It is also able to account for, say, watching on the phone and interruptions such as emails and calls, and that can be adjusted. Having said that, it does not account for falling asleep on the lounge!
Compared to other measurements such as Press and Magazines, TV is more robust. When we designed MOVE for OOH, we 'layered' the results from OTS to LTS - a great improvement. Duplication of Outdoor usage (and the LTS/OTS ratio) is very hard to measure, but collecting sufficient 'panellists' provides data that allows a model of frequency.
The current Digital measurement provides lots (too much??) of usage data from participants. However, I find that the Digital reporting and its thresholds to define Usage are of very little use for campaign planning. For example, the threshold to be counted as a User is 2 seconds, and the Public Reporting is usually by calendar month. Data I have seen in AU produces very flattering Reach %ages. Given it is the most immediate electronic medium, it seems to be the slowest and the most flattering media research data.
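To make the threshold point concrete, here is a toy illustration of how a low "user" cutoff flattens reach upward. The 2-second threshold comes from the comment above; the session durations and the helper function are invented for illustration.

```python
# Toy illustration: a 2-second "user" threshold vs. a stricter cutoff.
# Session durations (seconds), one session per person -- invented data.
sessions = [1, 2, 2, 3, 5, 40, 120, 600]

def reach_pct(durations, threshold_seconds):
    """Share of people counted as 'users' at a given duration threshold."""
    counted = sum(1 for d in durations if d >= threshold_seconds)
    return 100 * counted / len(durations)

print(reach_pct(sessions, 2))   # 87.5 -- nearly everyone 'reached'
print(reach_pct(sessions, 30))  # 37.5 -- far fewer at a stricter cutoff
```

The same sessions yield very different reach figures depending solely on where the counting threshold is set, which is the "flattering" effect John describes.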