Commentary

TV's Average Minute Audience: Here To Stay?

Hello, average-minute audience for digital TV platforms. It’s nice that you joined the TV party. Will you be hanging around?

Comparing traditional TV with digital media platforms -- or adding results from similar programming on digital platforms to traditional TV numbers -- is getting easier, thanks mostly to like-to-like average-minute-audience measures.

We now have NBC showing us -- via its own “Total Audience Delivery” measures -- how much digital media has contributed to its traditional TV results. For example, for the Winter Olympics opening ceremony, NBC said it earned an average-minute audience of 440,000 from all its digital platforms -- out of a total average-minute audience of 27.3 million viewers.

Sure, it’s tiny -- about 1.6% of the total.
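
For readers who want the arithmetic: an average-minute audience is just total viewer-minutes divided by the number of minutes measured, which is what makes the linear and digital figures addable in the first place. Here is a minimal sketch of that calculation, with invented minute-level counts -- not NBC’s or Nielsen’s actual data or methodology:

```python
# Sketch: average-minute audience (AMA) = total viewer-minutes divided
# by the number of minutes measured. The minute-level counts below are
# invented for illustration -- not NBC's or Nielsen's actual data.

def average_minute_audience(viewers_per_minute):
    """AMA = sum of per-minute audiences / minutes measured."""
    return sum(viewers_per_minute) / len(viewers_per_minute)

# Hypothetical minute-by-minute audiences for the same telecast.
digital = [400_000, 460_000, 450_000, 450_000]              # streams, apps
total   = [27_100_000, 27_500_000, 27_300_000, 27_300_000]  # linear + digital

digital_ama = average_minute_audience(digital)  # 440,000
total_ama   = average_minute_audience(total)    # 27,300,000

print(f"Digital share of AMA: {digital_ama / total_ama:.1%}")  # ~1.6%
```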

Now, let’s consider the highly touted, highly analyzed Netflix. Last week, Nielsen continued to offer some selective results on how Netflix's “The Cloverfield Paradox” movie and new TV series “Altered Carbon” performed.

For “Cloverfield,” after an initial three days of viewing -- somewhat akin to a traditional TV network’s live program plus three days of time-shifting -- the movie pulled an average-minute audience of 2.8 million viewers. Its first seven days saw 5 million.

“Altered Carbon” posted an average-minute audience of 1.2 million across all its episodes for the first three days. (Remember, Netflix typically releases a full season's worth of episodes of a TV series at once.) After seven days, “Altered Carbon” recorded an average-minute audience of 2.5 million.

Again, it’s tough to make like-to-like comparisons to an individual show on NBC, CBS, TNT, AMC, FX or others. But it gives us a rough indication.

Things are still far from perfect. Third-party measurers are still working out the bugs; Nielsen is still looking to get all TV networks to install software on their respective digital video platforms for precise metrics.

In the case of NBC, TV marketers now may have a better clue about viewers watching on linear TV (broadcast and cable) and digital platforms. Netflix? That data may matter somewhat less -- since the service doesn’t take advertising.

Trouble is, TV marketers have -- for the most part -- moved on. They want ROI results as they relate to whether consumers are buying their products/services. Have they seen, engaged with, or been moved by some advertising messaging -- on a particular network, at a specific time/daypart/program?

Yes and no -- and maybe the jury is still out -- especially when it comes to any kind of standards and benchmarks for comparison. But we’ll also take what we can get -- on average -- relatively speaking.

8 comments about "TV's Average Minute Audience: Here To Stay?".
  1. Ed Papazian from Media Dynamics Inc, February 20, 2018 at 10:17 a.m.

    Wayne, we should realize that Nielsen's "commercial minute audience" does not apply to a given 15- or 30-second ad message that happens to fall within the one-minute timeframe. It's probably higher than either one. More important, all that Nielsen knows is that a set was tuned in while a particular commercial or series of commercials was on the screen---but not that the self-proclaimed "program viewer"---who made this claim when the channel was first tuned in---actually watched a specific commercial or was even in the room. One might try to deal with this by demanding that Nielsen produce commercial-specific ratings---which would provide a huge amount of data, much of which would not be particularly well differentiated---but what does one compare such findings to when evaluating digital media?

    There are some who advocate that time on screen might be used, but how do you interpret such data? If a 15-second video commercial is on a laptop's screen for two seconds while the same ad might have been seen for eight seconds on a tablet, does this make the latter four times more "effective" than the former? And will an advertiser who has created a 15-second message pay anything for a digital user who may or may not have watched the commercial that was on the screen for only two seconds---or eight seconds? I would hope not. As I've said before, this is a very tricky subject, and we include a fairly comprehensive article on how it might be approached in our upcoming report, "Intermedia Dimensions 2018", for those who really care about finding a workable solution.

  2. Justin Poe from iSpot.tv, February 20, 2018 at 1:01 p.m.

    Good points, Wayne and Ed.  In my opinion, the best way to approach this is to actually measure the commercials themselves and then compare how much viewers are paying attention to ads on different platforms, as well as measure whether they took a specific action based on them.  That's our approach at iSpot.tv, and we're able to measure all of these areas in real time.

  3. John Grono from GAP Research, February 20, 2018 at 5:39 p.m.

    Ed, I am less pessimistic about 'average minute ratings' as a surrogate for commercial ratings.

    Many years ago I took a week of viewing in Sydney for the commercial broadcast stations (and yes, I realise I didn't include cable or publicly-funded channels) and married the data to the broadcast log files.   I couldn't identify which ads were broadcast - what I knew was that at those times it wasn't programme content that 'people were watching'.

    More importantly, PVRs were not prevalent then, so time-shifted viewing (the big ad-avoider) wasn't the big factor it is now.

    The thing I noticed is that I could inspect the raw data and easily predict when the ad-breaks were.   I saw lots of channel changing (content chasing/ad avoiding) and I also saw lots of 'set audience' change - getting a cuppa, loo breaks etc.   That is, 'the system' was picking up channel switching and leaving the room.   Whether everyone who left the room logged out is an unknown, but the rate was higher than I had anticipated (i.e. compliance seemed good).   The compliance results also virtually matched the 'spot checks' that the ratings company does.

    In round figures, the ad-breaks averaged a single-digit decline.   This ranged from low single-digit for the first and last spots in the break, up to high single-digit for the middle of the ad-break, which all makes sense.   (A rough sketch of that kind of dip-spotting appears after this comment.)

    I suspect that with PVRs the rate would now be higher - probably double would be my unscientific guess.

    Of course, this didn't pick up people who 'averted their gaze', were chatting, were reading etc.   Also the TV ratings system relies on audio matching (here in Australia) so no sound = no contribution to ratings.

    But we need to look at the 'purpose' of TV ratings.   They are basically to establish the number of people who watch a programme in the average minute - which includes the lower rating ad-breaks.   In essence they are saying, if history repeats itself this is our best estimate for next week (albeit events and competitive programming can change).

    Sure, the ad-breaks could be measured with more rigour, but the cost of that would be enormous.   Name me one advertiser who would pay.

    But surely, the way to get better 'ad ratings' is to make better ads!
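
A minimal sketch of the dip-spotting John describes -- flagging minutes whose audience falls a few percent below the surrounding programme level. The data, the 3% threshold, and the window size are invented for illustration, not anything from GAP Research's actual analysis:

```python
# Sketch: flagging likely ad-break minutes as dips below a rolling
# programme-level baseline. Data and the 3% threshold are invented.
from statistics import median

def likely_ad_break_minutes(audience, threshold=0.03, window=5):
    """Return minute indexes whose audience sits more than `threshold`
    below the median of the surrounding `window` minutes."""
    flagged = []
    for i, a in enumerate(audience):
        lo, hi = max(0, i - window), min(len(audience), i + window + 1)
        baseline = median(audience[lo:hi])
        if a < baseline * (1 - threshold):
            flagged.append(i)
    return flagged

# Invented minute-by-minute audience: programme content around 1.00m,
# with a three-minute ad-break dip (shallower at the edges, deeper in
# the middle, matching the pattern John describes).
viewers = [1_000_000] * 10 + [960_000, 930_000, 960_000] + [1_000_000] * 10
print(likely_ad_break_minutes(viewers))  # [10, 11, 12]
```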

  4. John Grono from GAP Research, February 20, 2018 at 5:45 p.m.

    Justin, I am interested in how you measure 'ad attention' across the population/viewing population.

    Also, what 'specific actions' are you measuring, and how can you be sure that the action was caused by the ad?

    Of course across the viewing population there will be a wide variety of types and levels of attention, as well as a plethora of actions.   Do you boil this down to an average?   And how do you dissociate correlation from causation - what sort of longitudinal multivariate media analysis are you using?

  5. Ed Papazian from Media Dynamics Inc, February 20, 2018 at 6:15 p.m.

    John, my problem with attempts to validate "commercial minute viewing" estimates via telephone coincidentals is that even though you ask the person, in effect, "Were you just watching TV?" and correlate this to whether a commercial was on the screen, the respondent most likely thinks that you are referring to program content, not the ads. So you get a huge inflation in "viewing just now" responses, which creates the illusion that you are measuring commercial audiences. In my latest report, "TV Dimensions 2018", we describe the results of a fair number of observational studies---including several brand new ones---showing a huge drop-off in "eyes-on-screen" ratios when ads appear which is not found in the "validation" studies. Also, there seem to be wide differences in viewer attentiveness from one ad to the other---which you don't see in the "were you just watching" studies. TVision now has a panel of 2,000 homes whose viewing---second by second---is being tracked via "eye cameras" and they, too, note similar variations---which is what one would expect.

    Regarding the validation studies, I don't have a solution. If you ask the respondent whether he/she was "just watching a commercial", I would expect many who actually did watch to deny this---again, creating a misleading impression.

  6. John Grono from GAP Research replied, February 20, 2018 at 6:43 p.m.

    Ed, the way we ran the telephone co-incidentals was "When the telephone rang ..."   The call is time-stamped and the claimed usage is matched/compared to the metered usage based on the 'exact' time (allowing for latency issues).   The matching is done device by device and person by person.

    The respondent is asked about channel and programme name - not the ad (and who in the household is viewing).   The ad content is derived from the metering and station ad-logs at the second-by-second level.   That way, the issue of recalling which specific ad aired is avoided.

    Of course this is not measuring 'ad recall' as we know it, but is measuring 'ad exposure' ... so your bailiwick of recall still exists.   But I come back to the purpose.   This is to quantify the number of people who were watching a programme and accurately co-operated, from which can be derived the proportion who also hung around for the ads.

    'taint perfect, but a reasonable indicator.   And more representative than on-line 'plays' are of ad exposure and recall.
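
A minimal sketch of the coincidental-to-meter matching John describes -- time-stamped call records compared with metered tuning at (almost) the same instant. The record layouts, field names, and 30-second latency tolerance are invented for illustration, not GAP Research's or any ratings company's actual procedure:

```python
# Sketch: matching telephone-coincidental claims against metered tuning.
# Record layouts and the latency tolerance are invented for illustration.
from datetime import datetime, timedelta

# Claimed usage from coincidental calls, time-stamped when the phone rang.
calls = [
    {"home": "H1", "ts": datetime(2018, 2, 20, 19, 31, 5), "claimed_channel": "Seven"},
    {"home": "H2", "ts": datetime(2018, 2, 20, 19, 40, 12), "claimed_channel": "Nine"},
]

# Second-by-second metered tuning for the same homes, device by device.
meter = [
    {"home": "H1", "device": "TV1", "ts": datetime(2018, 2, 20, 19, 31, 4), "channel": "Seven"},
    {"home": "H2", "device": "TV1", "ts": datetime(2018, 2, 20, 19, 40, 15), "channel": "Ten"},
]

TOLERANCE = timedelta(seconds=30)  # allow for latency between ring and answer

def matches(call, records, tolerance=TOLERANCE):
    """True if any metered record in the same home shows the claimed
    channel within the tolerance window around the call time."""
    return any(
        r["home"] == call["home"]
        and r["channel"] == call["claimed_channel"]
        and abs(r["ts"] - call["ts"]) <= tolerance
        for r in records
    )

agreement = sum(matches(c, meter) for c in calls) / len(calls)
print(f"Coincidental-to-meter agreement: {agreement:.0%}")  # 50% here
```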

  7. Ed Papazian from Media Dynamics Inc, February 21, 2018 at 4:18 a.m.

    John, I believe that we are kidding ourselves in thinking that we can measure average minute viewing---as opposed to set usage---of either program content or commercials with the peoplemeter. In the latter case, aside from cameras and other observational indicators, we have a vast amount of commercial recall scores which tell us there is tremendous variation by message in claimed and "verified" exposure that is not reflected by the peoplemeter results. As for program content, the same is true---but to a lesser extent. I have studied many attentiveness measures, and these indicate much more variability than the peoplemeters reveal within a particular program installment. Also, TVision appears to show the same thing.

    The basic assumption underlying the peoplemeter system is that once a person claims to be "watching" a TV show---invariably at the outset of the tune-in---this remains in force for every second afterwards, unless the "viewer" indicates otherwise. As such indications almost never happen, the "viewer" is assumed to be watching every second when, in fact, up to 10% may leave the room at a given point in the telecast while attentiveness levels move up and down---often to the point of not even looking at the screen---depending on what is transpiring in the show and whether the "viewer" is interrupted by someone or some activity. Let's face it, the peoplemeter system was merely an attempt to replace handwritten diaries with an electronic surrogate that would require compliance whenever the set was tuned to a given channel, as opposed to being filled out, by memory, in a diary days later. It was never intended to provide a defensible average minute "viewing" estimate.
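
The persistence assumption Ed describes is easy to make concrete: once a panelist presses "watching," every subsequent second is credited as viewing until an explicit log-out or the end of the tuning session, whatever the person is actually doing. A toy sketch, with invented event data rather than any actual peoplemeter format:

```python
# Sketch of the peoplemeter's persistence assumption: seconds between a
# "login" button press and the next "logout" (or the end of the tuning
# session) are all credited as viewing. Event data here is invented.

def credited_seconds(events, session_end):
    """events: sorted (timestamp_in_seconds, action) tuples,
    action in {"login", "logout"}. Returns seconds credited as viewing."""
    credited, login_at = 0, None
    for ts, action in events:
        if action == "login" and login_at is None:
            login_at = ts
        elif action == "logout" and login_at is not None:
            credited += ts - login_at
            login_at = None
    if login_at is not None:  # never logged out: credited through to the end
        credited += session_end - login_at
    return credited

# A panelist logs in at the top of the hour and never presses anything
# again: the full hour is credited, even if they left after ten minutes.
print(credited_seconds([(0, "login")], session_end=3600))  # 3600
```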

  8. nerd rage from Nerdrage Inc., February 21, 2018 at 1:53 p.m.

    Why drag Netflix into this at all (unless it's for the brand name that Nielsen wants to use to get attention)? Netflix doesn't compete for ad dollars. Facebook and Google do. That's the comparison Nielsen should be making.

    Nielsen's data has all sorts of problems. It doesn't measure mobile; it only measures domestic (more than half of Netflix viewers are overseas, and that proportion is growing); there's no reason to believe there has yet been enough time even to assess ratings (I'm currently up to episode two of Altered Carbon, but Nielsen thinks everyone who will watch this show has watched it already? How do they know even 1% of the eventual viewership has watched?); and most importantly, it doesn't measure the value of these shows/movies to Netflix.

    Netflix wants to gain new subscribers and retain old ones. What if the mere act of putting a show or movie in your queue correlates to retention, and you don't even need to watch it because you are reluctant to end your subscription as long as there are things to watch? Now, I doubt that's the only reason to continue subscribing, but there's some deterrent effect to merely having an item in your queue. A person with two items in their queue is more likely to cancel than someone with 50 items. Nielsen can't measure queue-building, only viewing.

    Gaining new subscribers is of course more valuable than "merely" retaining subscribers, since it's more difficult. I'm sure there are shows and movies that are better at gaining new subscribers than others. You can sort of suss this out by seeing where Netflix is being apparently irrational. The Crown is expensive and doesn't appear to have a huge viewership. But what if it's disproportionately good at bringing in new subscribers? Netflix seems to be big on teen dramas lately. Do those bring in new subscribers? Nielsen can't make that distinction.

    Bottom line, ignore Nielsen when it's talking about ad-free streaming. It's apples and oranges, or maybe apples and orangutans.
