Isn’t it about time we stopped talking about improving our TV rating system and did something about it? Under ideal circumstances, advertisers would like the networks and other national time
sellers to guarantee audience delivery for their commercials, but beyond that, it’s up to the ad messages to convey their claims effectively and motivate viewers to try the brand.
What we have instead is “average commercial minute audience” ratings, which vastly overstate the actual viewership of the ads, and are not ad specific. As we have long
noted, the Nielsen people meter system has no basis for determining whether a person who has identified him- or herself as a program viewer when the channel is first selected is still in the room --
let alone “watching” -- later, when a commercial break appears.
Unless the people meter panel member indicates a cessation of viewing when a break begins, which
almost never happens, the system assumes that all the ads were seen, on a second-by-second basis. Yet evidence from numerous observational studies tells us that this is not the case. In reality,
commercial “viewing” is being overstated by 50% or more, with variations by program type, the amount of ad clutter in the break, the inherent appeal of the ads aired, etc.
Clearly this is a problem.
It is therefore incumbent on national TV buyers and sellers to use new technologies to create a parallel system. Nielsen data
would continue to be used to measure TV set usage and claimed program viewing per commercial minute -- more or less in the current manner -- but a second service would provide commercial-by-commercial,
eyes-on-screen ratings for each telecast. Such findings would be used to “adjust” the Nielsen ratings downward for each advertiser’s buy, to reflect actual, not assumed, commercial viewing.
If a brand buys a schedule of :15s and :30s on a network in 15 to 20 different shows, and expects this to deliver about 200 18-49 GRPs, but the average ad viewing
factor for these placements is 30%, then the buyer would realize that only 60 real world commercial viewing GRPs are being delivered. This is the result the seller would guarantee, not the 200 GRPs
measured in the old way.
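The arithmetic behind this adjustment is simple scaling; a minimal sketch (the function name and the 30% factor are illustrative, taken from the example above, not from any actual measurement service):

```python
def adjusted_grps(measured_grps: float, ad_viewing_factor: float) -> float:
    """Scale conventionally measured GRPs by the eyes-on-screen viewing
    factor, i.e. the fraction of the reported commercial-minute audience
    actually observed watching the ads."""
    return measured_grps * ad_viewing_factor

# The example from the text: 200 GRPs bought, 30% average ad-viewing factor.
print(round(adjusted_grps(200, 0.30), 1))  # 60.0 real-world commercial viewing GRPs
```

The seller's guarantee would then be stated against the adjusted figure, not the measured one.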
Needless to say, it will come as a shock to many brands that they have been kidding themselves for years, basing their buys on wildly inflated
“commercial minute audiences.” However, once they grasp the significance of real commercial viewing data, they will adjust their thinking. Perhaps they will be motivated to spend more on
national TV to get back (or at least close) to the reach and frequency levels they thought they were attaining.
There will be issues to address. For example, how should commercial
viewing be defined? Is average-second eyes-on-screen the best metric? We don’t think so; a certain proportion of the commercial must be seen for a viewer to take in enough of the message for the
brand to make a sale. The solution may be determined by correlating second-by-second screen data with commercial recall findings from the same panel members. From this research, it may be found that
having eyes-on-screen for 50% or more of a commercial’s content is a more realistic metric than the average-second ratings, but this would take some work to determine.
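Such a threshold rule is straightforward to state computationally. A sketch, assuming hypothetical second-by-second panel data (the function names, the 50% default, and the sample figures are illustrative only):

```python
def qualified_exposure(eyes_on_seconds: int, ad_length_seconds: int,
                       threshold: float = 0.50) -> bool:
    """Count a panelist as exposed only if their eyes were on screen for
    at least `threshold` of the commercial's length -- the 50% rule the
    text floats as a candidate metric."""
    return eyes_on_seconds >= threshold * ad_length_seconds

def threshold_rating(panel_eyes_on: list[int], ad_length_seconds: int) -> float:
    """Share of panelists qualifying as exposed under the threshold rule."""
    exposed = sum(qualified_exposure(s, ad_length_seconds) for s in panel_eyes_on)
    return exposed / len(panel_eyes_on)

# Five panelists' eyes-on seconds during a :30 spot (invented data).
print(threshold_rating([30, 22, 10, 0, 16], 30))  # 0.6 -- three of five qualify
```

Calibrating the threshold itself is where the proposed recall-correlation research would come in.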
Another issue concerns the size and composition of the eyes-on panel. Clearly, it is going to show much smaller ratings than the people meter panel, yet it is unrealistic to expect that this can
be compensated for by using a panel double or triple the size of Nielsen’s. That would be prohibitively expensive. So, a compromise would be needed, perhaps settling on an eyes-on panel of
20,000-25,000 homes, which measures all content shown on their screens, including SVOD, OTT, etc.
Obviously, such a panel is not going to provide statistically reliable
break-by-break, eyes-on-screen ratings by demos for programs garnering per-minute audience ratings as low as 0.1%-0.2%, and there are many of these, especially on cable. In such cases, buyers may have to
be satisfied with total schedule eyes-on ratings on a gross basis, where every set usage instance is counted, but not broken out individually. Even in situations where rating size is less of a
concern, the data must be used sensibly; if necessary, with aggregate findings across a skein of episodes used as a substitute for more detailed but potentially unstable data for each commercial
placement. Finally, all findings must be on an absolute basis -- what percentage of the audience “saw” the ad -- not on a relative basis, which would only allow comparisons to the
breaks when each commercial appeared.
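The absolute-versus-relative distinction can be made concrete with invented panel figures (every number below is illustrative, not measured data):

```python
panel_size = 1000      # total eyes-on panel homes (illustrative)
break_audience = 80    # panelists whose set was tuned to the break
saw_ad = 40            # panelists with eyes on screen during the ad

# Absolute basis: share of the full panel that saw the ad.
absolute_rating = saw_ad / panel_size
# Relative basis: share of only the break's audience -- comparable
# across ads within that break, but not across an advertiser's buy.
relative_rating = saw_ad / break_audience

print(f"{absolute_rating:.1%} absolute vs {relative_rating:.0%} relative")
# 4.0% absolute vs 50% relative
```

Only the absolute figure lets a buyer roll exposures up across a whole schedule.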
Can such a system be developed?
There is no doubt that it can be done, not way off in the future, but right now.
The question is, will advertisers and their agencies take the required initiative to really improve the relevance of their national TV buys, including funding research, to come up with an acceptable
definition of true commercial exposure? Or do we just keep talking about it, but do nothing?
This commentary was originally published by Media Dynamics.