Commentary

C3 Or C7: Does It Really Matter?

Oftentimes, new and ostensibly improved research makes people think they are getting closer to the “truth,” when they are actually getting farther away. C3 is a good example.

Those of us who were involved in the initial industry meetings that led to C3 as a compromise between buyers who wanted Live only and sellers who wanted C7 remember that Nielsen did very little research into the methodology it decided to use in calculating this new metric.

Part of the reason was that the upfront was approaching. The industry needed to deal with the growing impact of time-shifted viewing, and C3 was designed as a one-year transition (two at most) until the industry post-buy systems, Donovan and Datatech (since merged to form Mediaocean), were able to report and post on exact minute ratings.

I wrote about how C3 did not take fast-forwarding through commercials into account, which, at the time, Nielsen acknowledged. Most people in the industry should have been fully aware. But again, since C3 was planned as a temporary move, and DVR penetration was barely 20% of TV homes, it wasn’t a major priority to fix this.


I remember saying at the time that once C3 was implemented, it would become the standard metric, and it would be difficult to move to something else, since neither agencies nor networks really wanted to deal with the nightmare workload involved in actually switching to minute-by-minute ratings on a massive scale, particularly after dealing with the one-year nightmare involved in just switching to C3 (anyone who worked in an agency TV research department back then knows what I mean).

I’ve been part of a few industry meetings since then where the topic of individual commercial or commercial pod ratings (my choice) was discussed. But there never seems to be any actionable follow-up.

Those of us who remember when cable was posted based on quarter-hour program ratings should realize that this data was actually closer to commercial ratings than C3 ratings are. That's simply a result of the way programs build or lose audiences over the course of a telecast.

C3, which merely weight-averages minutes that contain commercials based on how many seconds of commercial time are contained in each minute, is not necessarily close to any individual commercial rating.
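The weight-averaging described above can be sketched in a few lines. This is a hypothetical illustration, not Nielsen's actual tabulation code; the minute-level ratings and commercial-second counts are made-up numbers.

```python
def commercial_weighted_rating(minutes):
    """Weight-average minute ratings by the seconds of commercial time in each minute.

    `minutes` is a list of (minute_rating, commercial_seconds_in_minute) pairs.
    """
    total_seconds = sum(sec for _, sec in minutes)
    return sum(rating * sec for rating, sec in minutes) / total_seconds

# Three commercial minutes with illustrative ratings and commercial seconds.
pod = [(2.0, 30), (1.6, 60), (1.8, 45)]
print(round(commercial_weighted_rating(pod), 3))  # 1.756
```

Note that the result is an average over whole minutes that happen to contain commercial time; as the article argues, it need not be close to the rating of any individual commercial within those minutes.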

Over the past few years, for some reason, a lot of people started to think that C3 accounts for fast-forwarding through commercials.  It doesn’t.  This has to do with the way Nielsen measures the average minute.  Basically, it looks for the plurality (not the majority) of time viewed to determine whether a channel is counted as being viewed for that minute.  In other words, if during a given program minute, you are tuned to ABC for 20 seconds, and four other networks for 10 seconds each, you are counted by Nielsen as watching ABC for the full minute. 

When you are playing back something on DVR, however, there is only one channel. So any tuning to that channel, even just a few seconds, is a plurality, and will count as viewing the entire minute.  This will typically only impact the first and last minute of a commercial pod, but it obviously leads to C3 counting a significant amount of fast-forwarding as commercial viewing.
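The plurality rule and its DVR side effect can be sketched as follows. This is a hedged illustration of the crediting logic described above, with hypothetical channel names and second counts, not Nielsen's actual system.

```python
from collections import Counter

def credited_channel(seconds_by_channel):
    """Credit the full minute to whichever channel has a plurality of seconds tuned."""
    if not seconds_by_channel:
        return None  # no tuning at all in this minute
    return Counter(seconds_by_channel).most_common(1)[0][0]

# Live viewing: ABC has 20 seconds, four other networks 10 seconds each.
# ABC has a plurality, so it is credited with the entire minute.
live_minute = {"ABC": 20, "CBS": 10, "NBC": 10, "FOX": 10, "CW": 10}
print(credited_channel(live_minute))  # ABC

# DVR playback: only one "channel" exists, so even a few seconds of tuning
# (e.g. while fast-forwarding through a pod) is automatically a plurality.
dvr_minute = {"ABC (playback)": 5}
print(credited_channel(dvr_minute))  # ABC (playback)
```

The playback case shows why fast-forwarding leaks into C3: any nonzero tuning to the played-back channel wins the minute by default.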

Switching to C7 would not change much in terms of how far away we are from actual commercial measurement, and since it gives a better indication of overall program viewing, there’s no reason not to go with C7.  Any research analysis of time-shifted viewing should be done on a full-week basis anyway. 

The difference between C3 and C7 ratings for most cable networks is insignificant.  It is greater for the broadcast networks, which have a lot more original scripted series, and therefore much more time-shifted viewing. 

20 comments about "C3 Or C7: Does It Really Matter?".
  1. Ed Papazian from Media Dynamics Inc, March 18, 2015 at 9:56 a.m.

    Interesting points, Steve. I wonder what percent of the time the hypothetical scenario you described, or something like it, with one network being "viewed" for 20 seconds while four others were "viewed" for 10 seconds each during a commercial minute, applies. In such a case, I would think that no network should be credited with commercial viewing, not even the one with the most seconds. Indeed, it may be time for Nielsen to revisit its tabulation process for commercial minute "viewing" entirely. Why not a minimum time limit of, say, 30 seconds to qualify? If nobody attains that, discount all of the audience as not reached. Of course, the sellers won't like that, but isn't it time that buyers stopped compromising and demanded a more realistic metric?

  2. Ed Papazian from Media Dynamics Inc, March 18, 2015 at 10:11 a.m.

    I should add that, as a first step, it would be important to get from Nielsen a tabulation showing what percentage of the time, for commercial minute "viewers," one channel gets 30 seconds or more of the "viewing," compared to situations where no single channel dominates the "viewer's" time. This should be done for all dayparts and network types, not just broadcast network prime. If it materializes that 85-90% of the time one channel does, in fact, get 30 or more seconds of commercial "viewing," then the problem may not be as serious as is imagined. If, however, the situation is much more fragmented, it may well be that no commercial is getting "seen" to a sufficient degree to be counted as seen on any channel. I suspect that the extent of such fragmentation, due to frantic dial switching, is relatively minor on a minute-by-minute basis, but I'd like to see some real data on this.

  3. Steve Sternberg from The Sternberg Report, March 18, 2015 at 11:14 a.m.

    Interesting, Ed. At the time, I suggested that 30 seconds of commercial time in a minute be the threshold for counting a commercial minute, but I was out-voted.

  4. John Grono from GAP Research, March 18, 2015 at 5:13 p.m.

    Ed, just some thoughts and comments. First, here in Australia we use a slightly different method of 'crediting the minute'. We allocate the channel to the "middle of the minute" - the channel being viewed at the 30th second. Probabilistically it makes a fair degree of sense, but like many things is not perfect. But we also need to be aware of things like: how accurate are the actual time stamps? How much drift is there in the meter clock? How often are the clocks recalibrated? What latency issues are there with re-transmission? Here in Australia it is around 3 seconds on our biggest cable service when it re-broadcasts the FTA transmissions, and I have heard it can be as high as 7 seconds for satellite services. I think your "30-second" rule is pretty harsh, but I understand where you are coming from. Let's say that in a programme with an audience of 5 million, everyone does a little ad-break surfing and doesn't settle on any channel for 30 seconds (continuous or cumulative, by the way). We would have the situation of the ratings going from 5 million to zero, then back up to 5 million. Truth be known, the majority of people would still be viewing, and you will have fouled up your average minute TV viewing data. Also, truth be known, it is possible - if not highly likely - that ad-break surfers could be (legitimately) exposed to more than one ad in an ad-break minute. We run a lot of 15-second ads here in Australia. It is possible that someone could watch all four of those 15-second ads in a minute and, because of the 30-second rule, not be credited at all. That then becomes a known false negative. There is a LOT more thinking required on this one.

  5. Ed Papazian from Media Dynamics Inc, March 18, 2015 at 6:57 p.m.

    John, I agree that this is a complicated issue, made more difficult by the varied mix of "30s" and "15s" in each break. So far, the issue has been settled largely as a political compromise between sellers and buyers, so each side is "comfortable" with the result, rather than as a pure "let's do it right" determination. As I noted, and you have also mentioned elsewhere, the actual incidence of dial switching during a typical commercial minute is probably very low, perhaps only 5-7%, but I'd like to see the kind of analysis I suggested earlier in this thread from Nielsen to get a better handle on variations by daypart and network type. That's why it may be more practical to form a common-sense judgment rather than propose the kind of fine-tuned and expensive research needed to get a more precise solution. Not that I'm against it, but, rather, as a matter of priorities. Also, I'm sorry to say that I doubt the buyers and sellers will support such research.

  6. Tony Jarvis from Olympic Media Consultancy, March 19, 2015 at 5:34 p.m.

    First, a huge thank-you to Steve for making the current Nielsen measurement practice regarding C3/C7, and the lack of accountability it provides, crystal clear. While I applaud the collective thinking on yet another Gordian knot for the media measurement industry (it's always connected to the edit rules?), we should not forget the loss of TV ad exposure for those who are not frantically switching channels but have merely taken the dog for a walk or are making a nice cup of tea, and yet are still counted in the Nielsen audience. Panel compliance with meter "procedures" is poor at best, notably, I believe, in off prime time (Steve?).
    This discussion is particularly intriguing in view of "our" recent dialogue regarding cross-platform measurement and the need for equivalent or common currencies across channels at the exposure or "Eyes-On" level to the media vehicle (when the commercials are running or placed), which has huge definitional, measurement and editing implications if we want to achieve meaningful intra- or inter-media analysis. Ed correctly referred to such analysis as, "... the current practice of making arbitrary media mix decisions...".
    So here we are, full circle, in attempting to address true presence in the viewability or audio zone, and what defines "exposure" that is fairly comparable by media channel (versus ad effect, a completely different measure). If network TV is still at least the 600-lb. gorilla and cannot "get measurement right," we might ask when the advertisers and agencies are going to stop the TV measurement "nonsense," which in turn extends to the appalling measurement of Spot TV in the US!

  7. John Grono from GAP Research, March 19, 2015 at 5:55 p.m.

    Tony, I have just seen some compliance data regarding button pushing. You are correct that people are leaving the room and not pushing the button. But what was surprising was that this was almost exactly counter-balanced by people who were in the room and had not pushed the button! What you lost on the swing you won on the round-about.

  8. Tony Jarvis from Olympic Media Consultancy, March 19, 2015 at 6:06 p.m.

    John: Unfortunately the "swings and round-abouts" analogy may not be well understood here. (As an emigrated Brit, fully understood, though!) However, please confirm that the profiles of the different compliance groups were at least similar. If not, "Houston, we have a problem!" Steve, can you throw light on this potential balancing phenomenon here in the US? Or would you rather discuss the merits of roundabouts (or circles) in keeping traffic moving? Cheers

  9. Ed Papazian from Media Dynamics Inc, March 19, 2015 at 6:20 p.m.

    John, a number of studies in the U.S. by Nielsen have shown that the average discrepancy between the peoplemeters and what real-time telephone calls to the same people reveal is about 10-15% on the plus side and the same on the minus side. But this refers to program content, not commercials. Something like this is probably what you are referring to. No matter what one thinks of this kind of "validation," the biggest change that takes place, by far, when commercials appear is a decline in attention, followed by leaving the room and then dial switching. Were a true "eyes on" measurement available for audiences when program content and commercials were on the TV screen, the latter would inevitably "lose" many of those the system credits as "viewers." Our annual "TV Dimensions 2015" cites a fair number of camera and other observational studies that demonstrate this point. Unfortunately, many of these are quite old and badly need updating; nevertheless, the directional implications are clear. Commercials, as a genre, are less interesting than program content, and the peoplemeter system is not geared to drawing this critical distinction.

  10. John Grono from GAP Research, March 19, 2015 at 7:07 p.m.

    Tony, I don't have the detailed data at a demographic level. What we see is higher compliance in the older demos and poorer in the younger demos (as you would expect). I will try to get either data or assurances that the counter-balancing is across the board.
    Ed, the study is an 'across prime-time' study (constrained by CATI calling regulations, I suspect), so given that we have 13 minutes of ads in the hour, the calls should be evenly distributed, and I would expect around 80% of coincidentals to be programme minutes and 20% ad minutes. Before you ask - I don't have the data at that level.
    Of course, the best solution is ... to make better ads! I have a problem with a broadcaster being penalised (e.g. by having to provide compensation) for a bad ad that virtually makes people grab the remote or run screaming from the room - apologies for the hyperbole - and I represent the media agencies' and advertisers' interests, not the broadcasters'!

  11. Ed Papazian from Media Dynamics Inc, March 20, 2015 at 5:35 a.m.

    I doubt that the findings in these telephone-based "validation" studies can accurately reflect the distinction between commercial and program content viewing. When the caller asks the peoplemeter panel respondent, "What were you watching just now?", and a commercial happens to be on, the respondent is probably thinking about the program, not the ad. Even if the caller knew that an ad was on and changed the question to, "Were you watching the Brand X commercial just now?", this might cause many who actually saw the commercial to deny said exposure, so as to impress the interviewer and show themselves in a better light. After all, who watches commercials? The only way to really get at commercial "viewing" is a telephone recall study, probing for content playback, combined with a camera study which notes whether the peoplemeter "viewer's" eyes were on the screen or whether the viewer left the room or became visually distracted. So far, such an investigation has not been attempted, as far as I know, and I despair of it ever happening.

  12. John Grono from GAP Research, March 20, 2015 at 9:08 a.m.

    Ed, the procedure is to telephone the household so the timestamp is accurately known. The person on the phone is asked (i) who was watching a television, (ii) what channel it was on, and (iii) which set it was on. This is then compared back to the peoplemeter records at a second-by-second level for 'matching' (within a small tolerance). This is then aligned to reference signal recordings of the broadcast, again second-by-second. The primary purpose is to ensure that TV usage levels are acceptably reported, rather than checking at a channel or programme level. The point is, at the macro level the data is better than I would have expected. It would also be technically possible to analyse by programme content vs ad content - but that is not the sample design. Sure, it isn't perfect (what research is?), but it seems better than anything else thrown up. The problem is it is a 'point in time' analysis (conducted at regular intervals), so it is indicative only and couldn't be used as 'currency'. Again, it comes down to ... who wants to fund this research for currency? Just one other thing I noticed while the discussion revolves around the 'value' of the audience delivered. TV is judged by CPM (cost-per-thousand). I saw some Australian online video data yesterday that was looking at average CPV (cost-per-view). The thing that struck me was that the online CPV was roughly a third of TV's CPM. Since a CPV applies to a single view while a CPM covers a thousand, that makes online video several orders of magnitude more expensive on a per-thousand basis. And we still don't know a great deal about the efficacy of online viewability.

  13. Ron Levy from Ron Levy Associates, LLC, March 20, 2015 at 9:51 a.m.

    How true, Steve. As soon as respondent-level data was available, we at Datatech offered an exact minute post. It had all of the precision issues mentioned above, yet our agency clients were excited by the availability. Nevertheless, it got almost no usage for about two years until, post-acquisition by MediaBank, one of our clients began using the feature for one account.

  14. Tony Jarvis from Olympic Media Consultancy, March 20, 2015 at 2:17 p.m.

    Ron: I would suggest that agencies are aware of the imprecision of Nielsen's network C3 or C7 measurement process, which would make inferences based on minute-by-minute data "problematic" for isolating the audience for their client's brand ad. In addition, per this discourse, planning/buying surely needs to take place based on the aggregate position of the ad in the spot rotation, i.e., for the pod. On top of this, at the minute-by-minute level, how much of the rating is media-driven versus creative/brand-equity-driven? I would agree with John that the media cannot be held responsible for the drawing power of the creative.
    John: The MRC and IAB have established strict viewability processes, procedures, measurement and threshold levels for all the various online screens (minimum 50%) here in the US. Many advertisers and their agencies are now requiring 100% viewable from their metrics vendors, which unfortunately begs the questions we are raising for TV: is anyone there, and even if the total screen is "viewable" for many seconds (dwell time), does the ad earn an Eyes- or Ears-On? Perhaps Josh Chasin of comScore wants to chime in after his stimulating piece regarding cross-platform measurement?

  15. John Grono from GAP Research, March 20, 2015 at 7 p.m.

    PART 1. Hi Tony. Australia is in lock-step with the US IAB on viewability (50% minimum pixels, 2-second minimum). But we must remember that TV is 100% pixels for the entire duration viewed. We must also remember that the TV ad the client buys gets 100% of the screen pixels, whereas online video gets some lesser, unknown proportion. For TV, we are (correctly) querying, of the audience estimate reported, what proportion actually stays in the room and watches the ad. We are then also querying what attention/engagement/recall (or whatever your preferred metric is) is achieved. These should all be queried. They may also not all be able to be measured. The topline model we use here in Australia is that we estimate the likely viewing audience based on the previous four weeks of the programme average. We don't know what our competition will be (we might be up against a new blockbuster). We don't know what break-in-programme we will get, but as an agency we have negotiated for our clients collectively an agency-wide proportion. We don't know which position-in-break we will get, but again have negotiated an agency-wide proportion for first-in-break and last-in-break. What we do know from the elemental data is what the average ad-break rating reduction is likely to be, by analysing the minute-by-minute data for recent weeks, and we can then apply that to the forward programme estimates. This needs to be done based on 'live + as live' ratings and 'time-shifted' ratings. In broad terms, the overnight ad-break ratings reduce by 6%-10% (varying by demo and programme type), whereas the time-shifted ratings are likely to reduce by 80%-90%. That is, we do have a lot of broad averages that we can and do apply. We do not know whether the second-by-second data capture and button-pushing in the home is 100% perfect. Actually, we know it is NOT perfect from the coincidental studies.
But from them we also know (and I can't provide the actual data due to NDA) that when you analyse the proportion NOT properly logged in, those claiming to be viewing when called but not logged in are virtually equal in proportion to those claiming to be not viewing but logged in. That is, the total quantum of viewing is a good estimate - the programme and ad proportions of that estimate won't be quite as good, but since the errors net out to zero, over time it will even itself out. In essence, with the research funds we have, and without over-burdening the panellists, we have a reliable proxy.

  16. John Grono from GAP Research, March 20, 2015 at 7 p.m.

    PART 2. This is in distinct contrast to the online world. 'Viewability' is defined under the 50%/2-second rule. I prefer to call it 'Visibility'. That is, the ad is technically visible if it passes that hurdle. But there is an inherent assumption that on that browser tab, there is a person ready to watch when the ad is served. There is also the inherent assumption that the person's gaze will go to that sub-section of the screen to gain attention, which is probably OK, as we as humans have a limbic attraction to colour and movement (and sound if it is on). But there is also an inherent assumption that the tab in that browser is in focus. Many people have multiple tabs open at the same time (I have 11 tabs open at the moment). Each of those tabs is being served 'push' ads (remember when online was heralded as the 'pull' medium - it is for content, but not for ads) that, while passing the 'viewability' test, do not pass the 'visibility' test of the browser tab being in focus. There is also the inherent assumption that the browser itself is the only browser open - I have two open, one in 'private' mode in order to bypass a paywall, an increasingly common practice. Finally, it also assumes that the browser has focus on the computer and is not minimised. So, do this little test. Yes, your browser must be open now, as you are reading this post. But how many other tabs are open in that browser? Do you have another browser also open? Was your browser open but minimised before you started reading, and/or will you minimise it with, say, your home page or favourite news site open, when you finish reading? Now, is your TV on in another room with no-one in that room? What I am seeking first is comparability, so that in order to do cross-platform video measurement we are looking at roughly comparable data that adequately reflect the 'likely' viewing audience. Once we have that, we can then go back and increase the precision of those data.
I believe that at this point in time the greater and more urgent need is for acceptable cross-platform video (and audio) measurement, rather than turning up the microscope on a workable proxy. Re-calibrating that proxy comes later, when we have the funds and methods to build an acceptable and affordable currency for it. Having said "the funds" ... that will REALLY test the market's appetite for increased precision.

  17. Ed Papazian from Media Dynamics Inc, March 20, 2015 at 8:08 p.m.

    I think that several words culled from the title of this piece and Tony's comments are worth noting. Steve asked whether "it really mattered," and Tony pointed out that the agencies were "aware" of the limitations of the current TV rating system. So, in that context, John makes good points about where we are at present regarding comparability across platforms. My problem with all of this is the suspicion that even if we attain the demanded degree of technical comparability in terms of "visibility," or opportunity to see, we will have an inflated measurement that is not equally inflated for each platform, let alone content subsets within platforms. However, being a realist and a long-time observer of human behavior, I suspect that the end result will probably be that nobody sees the need to go deeper and really measure commercial viewing, ad impact, etc., as there are so many variables, each campaign is unique, etc., etc. Thus, in the long run, nothing is gained. I hope that I'm wrong, but...?

  18. Steve Sternberg from The Sternberg Report, March 21, 2015 at 7:36 p.m.

    I think part of the problem is that while most heads of research groups at agencies are aware of C3 limitations, most buyers and sellers are not. It's relatively easy to determine how much fast-forwarding is being missed by C3/C7. I suggested how it could be done when I was on the Council for Research Excellence, but Nielsen was not at all interested at the time. Again, that could just be because we all thought C3 was going away in a year or two.

  19. Steve Sternberg from The Sternberg Report, March 21, 2015 at 7:37 p.m.

    Ron, you are correct. I remember working with you when I was at Magna to get it done. We then worked with Donovan the next year to get them to be able to post on exact minute ratings.

  20. Tony Jarvis from Olympic Media Consultancy, March 22, 2015 at 4:38 p.m.

    Terrific dialogue everyone. Shall we simply say be very careful doing very precise things with very imprecise data? Cheers
