Big Differences In Data: Internet Video Could Make Traditional TV Metrics Look Positively Upstanding

Imagine if Nielsen said that "American Idol" got 30 million viewers for a particular episode -- but another competing research company said the show pulled in 12.5 million for the same episode.

Maybe a third company said neither is right -- that the biggest TV show in the land pulled in 48 million viewers. (A senior executive of the company that owns "Idol" producer 19 Entertainment believes the show's viewership was under-reported, especially since some 64 million votes were cast for a recent episode.)

All this, in effect, is similar to the mixed viewing signals sent by Internet researchers concerning online video destinations. A recent story in the New York Times shows why traditional marketers who use TV may be reluctant to spend large chunks of money in the new digital space -- even with the promise of better ROI from the online platform. The story notes that Nielsen Online reported 8.9 million visitors to Hulu in March, while another measurement firm, comScore, counted 42 million.

There's disparity for you. With numbers like these, traditional TV metrics may look downright upstanding.

Traditional TV research critics might still say Nielsen can't count its way out of a paper bag. Or make that paper diaries, which are still used for local TV measurement in a large part of the country.

One could surely argue TV still needs a strong competing viewing research service. Set-top-box data is certainly a welcome addition. But will there be these kinds of like-for-like data differences? We think not. Just take a look at recent TNS Direct View versus Nielsen data.

If you wonder why some marketers are moving back to TV during this recession, or keeping some money on the sidelines that would have gone to new digital platforms, you only need to see what Hulu and some other Internet video providers are going through.

Certainly, cable and syndication TV in the '80s and early '90s went through some weird bumps when it came to their young and growing viewership -- volatility that, at least in cable's case, had to do with the size of those networks' respective universes.

Advertisers still bought media on those platforms, with the promise of better days to come. The big question: Will online video advertisers do the same now, with billions of dollars of media budgets on the line in an increasingly fractionalized marketplace?



4 comments about "Big Differences In Data: Internet Video Could Make Traditional TV Metrics Look Positively Upstanding."
  1. David Kohlberg from BlogHer, May 18, 2009 at 12:55 p.m.

    This is not just true for online video. The company I work for, a blog network, also sees skewed reporting from the different online measurement companies!

  2. Aaron B., May 18, 2009 at 2:08 p.m.

    Well, at least they didn't measure their numbers by "internal tracking," which could mean anything under the sun.

    But as far as online viewership goes, while there's certainly a disparity between research groups, producers also have to figure out what to focus on: unique visitors, unique streams, total streams, total streams from key locations (workplaces and college campuses)... these measurements and more often vary between research firms, and beyond that, I'm sure some people have differing definitions of what makes a website browser "unique."

  3. Tom Francoeur from Communispace, May 18, 2009 at 5:34 p.m.

    For a related theme, check out today's Video Insider article from SeaChange's Simon McGrath.

    "Why David vs. Goliath Doesn't Apply in the Case of Online Video Advertising vs. TV"

  4. Martin Russ from Freelance Technical Author, May 20, 2009 at 11:41 a.m.

    Measuring just about any aspect of a network as enormous and complex as the Internet is always going to be a challenge. Multiply this by the variability and unpredictability of human beings and you have a lot of scope for variation. Then add in the ease with which current technology seems to allow us to almost instantly and effortlessly do things online that used to require dedicated hardware and processes, and you have a recipe for huge variation in results leading only to indecision.

    Pitching one set of results against another may well cause conflict and thus distract us from finding a solution. Given several differing reports, do you try to decide which one is correct, or do you go with the one you are most familiar with, or the one you trust, or just instinct? Often the one thing not to do is to make no decision at all.

    As with many online measurements, the key could be to standardise on one tightly defined technique and then apply that uniformly. That way there's one common agreed measure. Otherwise we risk straying into the subjective world of 'Who's the best movie actor?'...
