Commentary

It's Time To Apply Digital Video Metrics To TV Ratings

by Mark Green, Op-Ed Contributor, August 31, 2017

The Media Rating Council is proposing new guidelines for measuring video advertising across screens that add weighting for duration. By adding seconds of duration to the measurement of an interactive medium like digital, we get closer to rigorously measuring what we really want to know: Is the viewer actually watching, or not?

As viewer distractions increase, this becomes more important.

Thanks to the MRC’s introduction of its viewability standard for digital, the improved quality of digital inventory has helped accelerate the growth of digital video.

If we focus on duration of viewability for cross-screen video, it makes sense to do the same for the television audience. Measuring the duration of "watching" TV would deliver more valuable data to networks and advertisers, helping them improve impact. Just 10 years ago, advertisers did not have as many communication options, so there was little need to compare different ways of reaching people with video.
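
To make the idea concrete, below is a minimal sketch of how seconds of duration could be folded into a viewable-impression count. The record layout and the weighting formula are illustrative assumptions, not the MRC's proposed methodology.

    # Minimal sketch: weighting viewable video impressions by seconds in view.
    # The fields and the weighting rule are hypothetical, not the MRC spec.
    from dataclasses import dataclass

    @dataclass
    class Impression:
        ad_length_secs: float   # full length of the video ad
        viewable_secs: float    # seconds the ad was at least 50% on screen (assumed field)

    def duration_weighted_impressions(impressions):
        """Count each qualifying impression, weighted by the share of the ad in view."""
        total = 0.0
        for imp in impressions:
            # MRC treats a video ad as viewable after 2 continuous seconds at 50% of
            # pixels; this sketch simplifies that to "at least 2 viewable seconds."
            if imp.viewable_secs >= 2:
                total += min(imp.viewable_secs / imp.ad_length_secs, 1.0)
        return total

    # Two 30-second ads: one in view the whole time, one for only 5 seconds.
    print(duration_weighted_impressions([Impression(30, 30), Impression(30, 5)]))
    # -> roughly 1.17 weighted impressions, instead of 2 raw viewable impressions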

What Counts As "Watching" TV? 

The industry standard for measuring television currently involves active metering systems that determine when someone is in the room (via a people meter) or when data models suggest someone is in the room (via set-top box and smart TV data). These systems do not observe whether eyes are actually on the screen, let alone track attention second by second the way digital does.

Picture how many times you go into a room, switch on the TV, watch for a few minutes, and then start doing other things in the room, whether that's getting a glass of water or looking at your phone. People do not continuously attend to passive media like TV. This differs from interactive media, where the user is either paying attention or the device is switched off.

Even when a viewer is actively engaged in watching television, we know that two screens are often employed at once. You could be watching CNN and decide to tweet about a news story at the same time. You are engaged with the content, but distracted by a second screen. In the age of distraction, it’s important to measure how much actual watching gets done when someone is ‘watching’ television.

Before digital, distraction was less of an issue because all TV was linear; you had to watch then and there. In addition, the internet was much less distracting before the rise of social media and the deluge of content sharing. These distractions matter today.

Discerning True Value

Changing MRC guidelines to include viewability duration for video is a great step toward measuring watching on digital — and it’s time we did the same for television. Shouldn’t we put a higher value on a viewer’s attention than their proximity to a television that is switched on?

Of course, analyzing a digital screen differs from analyzing a television screen. Data giants can parse digital interactions to the second, while television ratings are still reported by the half-hour or hour-long program.
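
The gap is easy to see with a toy example: the same made-up viewing session summarized second by second, as a digital platform might report it, versus as a single program-level average.

    # Illustrative only: a fabricated second-by-second attention log, rolled up two ways.
    attention_log = [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]  # 1 = attending during that second

    # Digital-style view: attention is known for every second.
    per_second = attention_log

    # TV-style view: one average figure for the whole program or half hour.
    program_average = sum(attention_log) / len(attention_log)

    print(per_second)       # [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]
    print(program_average)  # 0.5 -- the dips and recoveries disappear in the average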

The industry agrees viewability is important, but we face challenges adapting to all screens. One thing is clear: There is a big opportunity to measure viewership by how people actually watch TV in the age of distraction.

11 comments about "It's Time To Apply Digital Video Metrics To TV Ratings".
  1. Ed Papazian from Media Dynamics Inc, August 31, 2017 at 9:29 a.m.

    Mark, Nielsen's peoplemeter system for national TV shows does not measure whether someone is in the room when a TV set is on but, rather, whether someone claims to be "watching" the show when it first appears on the TV screen. Beyond that, panel members are instructed to notify the system whenever they cease "watching" and to reaffirm that they are watching whatever show is on when the channel is changed. Since few panel members bother to log in and out of the "audience" whenever they leave the room or engage in distracting activities---or cease paying any attention---you never get a finely articulated or "granular" measurement. Nielsen's system assumes that, in the absence of notifications to the contrary, the claimed "program viewer" is watching every second while the show is on the screen---including the commercials. This is clearly not the case, but that's what we have to work with and, frankly, it's hard to see what else Nielsen could do except install eye cameras in all of its panel homes, covering every set and/or screen---a monumental task, at great cost and raising many issues about cooperation rates, how the data is tabulated, etc.

    Regarding digital media, while time spent seems like a way to "measure viewing", even if this assumption were valid, one would need a different definition for every commercial length for video ads---and who knows what for display ads, which come in all sizes and configurations. For example, when one video ad is on the screen for 20 seconds, while another is on the screen for 10 seconds, can we assume that the former was "seen" while the latter wasn't? And if we come up with a formula---like ad-on-screen-for-two-thirds-of-its-length---how do we compare this to TV? And how does Nielsen or any other data supplier process the ratings? Is a different rating for every commercial placement the solution---a gigantic data processing problem with countless permutations on a show-by-show and telecast-by-telecast basis? I suspect that by attempting to drill down to the commercial "exposure" level we may be unwittingly creating a monster that cannot be implemented and will be bypassed by most sellers and time buyers. It's a very thorny subject which I have been wrestling with for some time. Possibly an average-second rating for video ads, regardless of length and positioning, might be one way to circumvent the practical problems I foresee---but even here, we don't really know if anyone is watching the ads.

  2. Mark Green from TVision Insights, August 31, 2017 at 9:49 a.m.

    Ed, Agree with your discussion on Nielsen details. I worked as a global leader at Nielsen running large sections of what it used to call Measurement Sciences, before I moved into the world of scaling startups. You are correct that I am providing practical descriptions of what in my view (and experience) the measurements represent, rather than technical descriptions. Thank you for clarifying. As you know, I would advocate introducing computer vision tech in the home. I have seen firsthand at TVision that this works. Regarding digital, agree that more work is needed. Working on it, but I think interactive media typically has more continuous involvement than passive media. Finally, I am still a big fan of TV and its power to communicate. I just think we need better measurements of which parts drive the impact, to enable better experiences and more efficient leverage of its power. Best, Mark

  3. Jerome Samson from 3.14 RMG, August 31, 2017 at 11:54 a.m.

    Hi Mark - Nice write-up. Just to add one more layer to this discussion, Nielsen is really measuring 'listening' rather than 'viewing.' I know you know that, but your readers might not, and this brings a different light to the debate over what constitutes 'attention.' It also adds an interesting twist to viewability considerations for online video ads, doesn't it?

  4. Ed Papazian from Media Dynamics Inc, August 31, 2017 at 12:13 p.m.

    Jerome, I'm not sure I get what you mean by Nielsen is measuring "listening" not "viewing". You may be referring to Nielsen's PPMs, used for out-of-home TV ratings, which is true, but the basic peoplemeter in-home TV panel member is asked to indicate whether he/she is "watching" or "viewing"---I'm not sure exactly what words are used----when a program appears on the screen. In either case, the result is a highly subjective assessment, and clearly  does not refer to commercials but only to program content---at least that must be the way that most panel members interpret it.

  5. Jerome Samson from 3.14 RMG replied, August 31, 2017 at 1:42 p.m.

    Hi Ed - The Nielsen set meters work primarily by picking up audio watermarks that are embedded in TV content. They're 'listening in' on the viewing.

  6. Ed Papazian from Media Dynamics Inc, August 31, 2017 at 2:51 p.m.

    Jerome, of course. That's how Nielsen tracks the presence of commercials. But we are referring to measuring---or trying to measure---not whether a commercial is on the screen but whether someone is watching. There, we rely on a highly judgmental self-appraisal by a Nielsen panel member who, when a program first appears on the screen, is asked to identify him/herself as "viewing" that program. Then, the system assumes that the claimed program viewer "watches" every second of content---including the commercials---unless the dial is switched or, if not, unless the panel member tells the system that he/she stopped viewing---which almost never happens. In reality, very few viewers sit in front of their TVs, eyes on the screen, paying full attention, second after second throughout the telecast.

  7. Jerome Samson from 3.14 RMG replied, August 31, 2017 at 3:35 p.m.

    Oh, absolutely. And TVision is certainly offering some elements of an answer, as Mark pointed out. But it's important, I think, to remind readers that TV ratings currently depend on tracking audio cues. You can be watching TV with your eyes on the screen - if the sound is muted, it's like you're not there at all. On the other hand, you can be turning your back to the screen, and as long as the audio is on and you haven't logged off the peoplemeter, you're still counted. The audio reaching your ears is more important than the video reaching your eyes. That's what I mean by 'watching is listening.' This doesn't mean that we shouldn't strive to have both, of course!

  8. John Grono from GAP Research, August 31, 2017 at 8:32 p.m.

    Great discussion. Yes, TV works by audio matching (either encoded watermarks or reference matching). Another take is that TV measurement excludes anyone who is watching but can't be listening because the TV is muted.

    Yes, we could add 'vision tech' - and indeed I remember the US Nielsen 'face matching' system being shown here in Australia in the early '90s. Thankfully the hardware and software are now way more advanced - but at the end of the day it was 'spooky'. You end up getting a 'perfect' measure of a non-representative sub-group of the population.

    On the digital side you also need to consider that on a TV you have a single screen with single content (OK, some people use screen-in-screen, but very few), while digital can have numerous servers all serving content and ads to browsers and apps that don't have the focus. This is a major overstatement of usage if it is measured only at the server side and not the user side.

  9. Jerome Samson from 3.14 RMG replied, September 1, 2017 at 1:08 a.m.

    Right, John. Though most reputable media research companies don't rely solely on server-side metrics these days.

    Funny that you should mention the 'passive peoplemeter' from back in the early 90s. I actually ran that program at Nielsen for a couple of years back then! The 'spookiness' didn't come from the tech or the concept per se, but from the realization that if it were to be successfully deployed, it would need to be installed everywhere people watched TV. The living room? No problem. But in the bedroom or the bathroom? Not so much!

  10. John Grono from GAP Research, September 1, 2017 at 1:55 a.m.

    Jerome, my comment related to internal publisher data, which way too many advertisers and agencies rely upon.

    Just this week, a local article was published quoting internal data for a large social media site that basically showed it reaching up to 40% more people than Australia's population!

    The 'spookiness' comment was based on 'public reaction'. We showed the passive People Meter to the industry at Taronga Park Zoo here in Sydney. While everyone was impressed with the matching to the household member, the 'eyes on' function etc., virtually everyone said they'd never have that in their home! A large part of it was the setup and size, and I acknowledge that the technology is now many orders of magnitude better, smaller and more accurate - so that, and the pervasiveness of connected devices, may have reduced the spookiness and increased the acceptability.

    Cheers.

  11. Jerome Samson from 3.14 RMG replied, September 1, 2017 at 11:25 a.m.

    Haha yes, I saw that. Some folks are saying there's nothing wrong with those social media figures - it's just the Australian census that's wrong! :p
