Commentary

TV May Be Everywhere, But Research Is Nowhere

In the 1960s, media philosopher and visionary Marshall McLuhan dubbed the new electronic media world of television a global village. For the first time, people could see and hear live events as they were happening. Nearly 50 years later, the global village has become a personal village.

People still have access to the same information at the same time, but they no longer have to access it at the same time, at the same speed, on the same platform, or even on the same device. Yet the basic unit of TV audience measurement, used as marketplace currency, has remained essentially unchanged.

Our industry now finds itself at a crossroads. The way viewers consume video-based media is changing at an unprecedented pace. But the way video media is bought and sold can't really change significantly unless the way audiences are measured can keep up. Well, it's not keeping up, and at this rate, in a couple more years it will be significantly behind the curve.

Even though the capability now exists, Nielsen's meters were never really designed to measure individual minutes. They were designed to measure half-hour or hour blocks of time. In today's remote control, pause, and high-speed fast-forwarding television environment, we should not just assume that Nielsen's reported measurement is even close to capturing real viewing activity during a given minute. We should be testing multiple definitions of minute viewing to better represent the real world.


Relatively small samples simply cannot accurately measure television program viewing in today's 200+ channel television environment. Samples are based on the notion that someone who fits a certain profile (that research indicates impacts media behavior) represents a lot of other people with the same profile. This works on a broad media basis. Nielsen television usage data is remarkably accurate for total people and broad demographic segments on a total day basis. But as the demos and dayparts get narrower, sample-based data is not nearly as good.
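The sampling math behind this point is simple to illustrate. The sketch below uses assumed, purely illustrative sample sizes and ratings (not Nielsen's actual panel figures) to show how the relative error of a rating estimate balloons as the demo and daypart narrow:

```python
import math

def rating_error(sample_size, rating):
    """Standard error of a rating (a proportion) from a simple random sample,
    returned in rating points."""
    p = rating / 100.0
    se = math.sqrt(p * (1 - p) / sample_size)
    return se * 100.0

# Illustrative numbers only: a broad total-day measurement vs. a narrow
# demo watching a small network in one daypart.
broad = rating_error(sample_size=20000, rating=10.0)
narrow = rating_error(sample_size=300, rating=0.5)

print(f"broad:  {broad:.2f} pts of error ({broad / 10.0:.1%} relative)")
print(f"narrow: {narrow:.2f} pts of error ({narrow / 0.5:.1%} relative)")
```

With these assumed inputs, the broad estimate is off by only a couple of percent of its own value, while the narrow estimate's error approaches the size of the rating itself, which is the author's point about sample-based data degrading as the cells get smaller.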

Given DVRs and hundreds of channels, when it comes to measuring specific network or program viewing, Nielsen's limited national sample can no longer be projected to total U.S. viewing levels. It might accurately indicate that I'm watching something, but not necessarily what I'm watching.

This is why some form of census-based measurement (i.e., set-top-box data) is essential to accurately measure real TV viewing. But set-top data is nowhere close to being currency. There is no national footprint, and no standards for reporting the data. Plus, the industry is still stuck in the muck of broad age/sex demographic planning and buying -- and CPMs, which still hover over innovative measurement like an albatross.

Viewing has also changed -- radically. If ratings remain the audience metric, should the cost-per-rating point (or viewer) be the same for the same content regardless of platform? Or is one platform inherently more valuable?

I wonder if the average viewer will put up with the same commercial loads online as in the TV broadcast, particularly if they can't fast-forward the ads. A PC or laptop is a fundamentally different medium from a television set. The computer is much closer to my face, and I have different expectations and different tolerance levels for program interruptions.

Television/video usage is dramatically changing. There is more granular and dispersed data than ever before, and decisions regarding measurement methodology are about to be made that will affect audience research and marketplace currency for years to come. Unfortunately, these decisions will be largely controlled by the sellers. This should be a major concern to advertisers. There are no longer enough authoritative and influential research voices on the buy side to balance the scales.

The idea that delayed viewing would ever be enough to cause the real commercial audience for regular prime-time series to be higher than the live program audience is absurd. Does the fact that Nielsen says it's so validate Nielsen's own measurement? To me, it indicates that current measurement of individual minutes and how fast-forwarding through commercials is captured are fundamentally flawed.

Which brings us to another point. Is it in the realm of possibility that when people watch prime-time series via their DVR, they actually watch 40% of the commercials? Does the fact that TiVo and other set-top suppliers indicate the same thing provide validation for this ridiculous notion? It means we have a lot more work to do in figuring out how to accurately measure minute and sub-minute audience data.

2 comments about "TV May Be Everywhere, But Research Is Nowhere".
  1. Steve Sternberg from The Sternberg Report, April 2, 2010 at 12:35 p.m.

    For the full analysis check out my blog at http://www.tinyurl.com/tevesternberg

  2. John Grono from GAP Research, April 19, 2010 at 12:16 a.m.

    And just to clarify, Nielsen meters (and every other peoplemeter I have ever worked with) DO track data (tuning and viewing) at the sub-minute level. When I first used the Nielsen meter in the early '90s it polled every 2.7 seconds (22 polls a minute - no idea why it was 22).

    The reason for this was to try to provide a 'representation' of viewing, not to track each and every channel change and viewer change. What was found was that people "hunted" around channels, and the objective of the system was to measure the programmes that people were watching (i.e. the buyable unit). All these "change-lines" simply muddied the analysis. So, the decision was made to 'aggregate' the change lines to the minute-by-minute level. This was done either by using the "dominant" channel (the one with the highest count of the 22 polls each minute), or the channel tuned at the middle poll (the mid-minute).
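    [Editor's note: the two aggregation rules described above can be sketched in a few lines of Python. This is a hypothetical illustration of the logic, not the actual meter software; channel numbers and the sample minute are invented.]

```python
from collections import Counter

POLLS_PER_MINUTE = 22  # one poll roughly every 2.7 seconds

def dominant_channel(polls):
    """Collapse one minute of channel polls to the channel seen most often."""
    assert len(polls) == POLLS_PER_MINUTE
    return Counter(polls).most_common(1)[0][0]

def mid_minute_channel(polls):
    """Alternative rule: attribute the minute to the channel at the middle poll."""
    assert len(polls) == POLLS_PER_MINUTE
    return polls[len(polls) // 2]

# A minute of "channel hunting": the viewer flips around, then settles on channel 7.
minute = [2, 5, 9, 5] + [7] * 18
print(dominant_channel(minute))    # 7
print(mid_minute_channel(minute))  # 7
```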

    The other reason is that with "drift" in the clocks (yes, they were synchronised every day), the second-by-second data risked mis-attribution. Further, with re-broadcasts, there were latency issues which further muddied the waters.

    The final step was then to aggregate those minutes up to the programme level, using programme lineups provided by the broadcasters. So, you can see there was a lot of reason and logic behind the approach taken.

    Of course, now we are in the 2010s, and the sample sizes are creaking at the seams. Clearly we need to work out ways to "clean up" STB channel tuning data and merge it with people-based viewing data in a hybrid solution.

    I hope this provides some much-needed clarity on the task, the technology, and the methodologies.
