Commercial Ratings: From Conceptual Leap To Implementation

Nielsen has taken a bold conceptual leap in how it will measure television viewership. Starting this fall, the company will issue commercial ratings for every program. These ratings will be an average of the ratings for all minutes that contain commercial content.

This new data "stream" has support from both sides of the industry, and conceptually it's a big step forward, especially for our vendor partners. From a practical perspective, however, the impact of the data will be much more limited.

The proposed commercial ratings, being an average of all commercial minutes, will only reduce ratings by roughly 1 percent to 5 percent across genres and dayparts. This will have little impact on the day-to-day business. With all program ratings declining by reasonably similar percentages, the peaks and valleys seen in ratings today will remain with the new data. In addition, the resulting rating differences are well within the range of reasonable statistical error in a post-buy analysis.
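The arithmetic behind that small decline can be sketched in a few lines. The minute-level ratings below are hypothetical, invented purely to illustrate the mechanics, not actual Nielsen data:

```python
# Sketch: why an average over commercial minutes lands a few percent
# below the program average. All numbers here are hypothetical.

minute_ratings = [5.2, 5.1, 5.0, 4.9, 5.0, 5.1, 5.2, 5.0, 4.8, 4.9]
# Suppose minutes 3, 4, 8, 9 (0-indexed) contain commercial content,
# and viewers tune away slightly during those minutes.
commercial_minutes = {3, 4, 8, 9}

program_avg = sum(minute_ratings) / len(minute_ratings)
commercial_avg = sum(minute_ratings[i] for i in commercial_minutes) / len(commercial_minutes)

decline_pct = (commercial_avg - program_avg) / program_avg * 100
print(f"program avg: {program_avg:.2f}")      # 5.02
print(f"commercial avg: {commercial_avg:.2f}")  # 4.90
print(f"change: {decline_pct:+.1f}%")           # about -2.4%
```

Because viewers only tune away modestly during commercial breaks, the commercial-minute average sits just a few percent below the program average, which is why every program's rating drops by a similar, small amount.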



Nonetheless, the change in philosophy, from advertisers paying for people viewing the program, to paying for people viewing their commercials, is a conceptual shift that should be applauded. Just keep in mind that this data is not the end result, but only a step in the right direction.

What the issue is not: Nielsen doesn't actually measure "commercial minutes." Its measurement of "commercial" minutes starts on the clock minute, while commercials and programming cross that boundary.

What the issue is: Specific-minute ratings are subject to significant statistical error relative to today's typical audience sizes. And audiences, in general, are only getting smaller as both viewing devices and options grow. So where should we draw the line? Is commercial-minute average data (what Nielsen is developing) acceptable? As an end point, no. Are specific minute-by-minute ratings necessary? For viewing-analysis purposes, yes, but probably not from an implementation standpoint: that level of detail is subject to too much error.
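The sampling-error point can be illustrated with the standard binomial standard-error formula for a panel-based rating. The panel size and rating level below are hypothetical stand-ins, not Nielsen's actual figures:

```python
# Sketch: relative sampling error of a single-minute rating from a panel.
# Uses the textbook binomial standard error of a proportion; panel size
# and rating level are hypothetical, not Nielsen's actual parameters.
import math

def relative_error(rating_pct: float, panel_size: int) -> float:
    """Return one standard error as a percentage of the rating itself."""
    p = rating_pct / 100.0
    se = math.sqrt(p * (1 - p) / panel_size)  # binomial SE of a proportion
    return se / p * 100.0

# A 1.0 rating estimated from a 10,000-home panel, for a single minute,
# versus averaging 30 such minutes together:
single_minute = relative_error(1.0, 10_000)
averaged = single_minute / math.sqrt(30)  # error shrinks as minutes are pooled
print(f"single minute: about ±{single_minute:.1f}% relative error")
print(f"30-minute average: about ±{averaged:.1f}%")
```

The point: a single minute's rating carries roughly five times the relative error of a half-hour average, so the finer the time slice, the noisier the number.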

So where do we draw the line? The issue at hand is to understand audience flow through programming and commercials. We need to evaluate viewing behavior according to consumer behavior. The proposed commercial ratings intentionally dilute audience flow dynamics.

Where We Need To Go

We've identified audience flow patterns through programming and commercials, and the variables that predict and drive those patterns. Pod selection becomes evident through this process. It's about where in the program to be--not necessarily which genre or program to be in.

Generally speaking, there are three types of rating patterns for programming, with each type containing variations within:

  • Incline (growth in viewing throughout a telecast).
  • Flat (stable viewing throughout a telecast).
  • Decline (loss in viewing throughout a telecast).
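One simple way to sort telecasts into the three pattern types above is to compare the audience at the start and end of a telecast. The threshold and minute-by-minute ratings below are illustrative assumptions, not the authors' actual methodology:

```python
# Sketch: labeling a telecast Incline, Flat, or Decline from its
# minute-by-minute ratings. Threshold and data are illustrative only.

def classify_pattern(minute_ratings: list[float], threshold: float = 0.03) -> str:
    """Compare the average of the first and last thirds of minutes;
    a relative change beyond the threshold marks growth or loss."""
    n = len(minute_ratings)
    third = max(1, n // 3)
    start = sum(minute_ratings[:third]) / third
    end = sum(minute_ratings[-third:]) / third
    change = (end - start) / start
    if change > threshold:
        return "Incline"
    if change < -threshold:
        return "Decline"
    return "Flat"

print(classify_pattern([3.0, 3.2, 3.4, 3.6, 3.8, 4.0]))  # growing audience
print(classify_pattern([3.0, 3.0, 3.1, 3.0, 2.9, 3.0]))  # stable audience
print(classify_pattern([4.0, 3.8, 3.5, 3.3, 3.0, 2.8]))  # shrinking audience
```

A real analysis would add the variations within each type and the driver variables discussed below; this sketch only captures the three top-level shapes.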

Through pod-to-program relational analyses, key variables stand out as drivers of which pattern a program follows. The most obvious variable is television usage (HUT/PUT levels). But television usage is not as strong, across the board, as one would initially think, nor is it the most important variable. Using minute-by-minute data, we've identified these key variables and their potential impact for fall 2006 programming.

In the end, it's about understanding the habits of the audience (or consumers). This is the next conceptual leap we must make. It's about where audience flow dynamics explicitly impact exposure. The new data is not good enough today; we need to start seeing and implementing the real viewing patterns.
