YouTube Issues Its Own Cross-Media Measurement Principles, Implies TV-Centric JIC Is A 'Silo'

Just days after a TV industry initiative to form an ad industry "JIC" released its criteria for certifying cross-media ad currencies, Google's YouTube unit this morning released its own set of principles, asserting that true cross-platform audience measurement should be platform agnostic and inclusive.

"We’re siding with marketers who need to measure reach across all platforms, regardless of the content creator, video length or camera quality. This will improve consumer ad experiences and help marketers and agencies drive much needed efficiency in their media investments," Managing Director of YouTube/Video Global Solutions Kate Alessi writes in a post this morning on Google's Ads & Commerce blog.



"We propose five principles that, when adopted, ensure solutions can meet marketer needs and improve consumer ad experience," she continues, citing:

  • Comprehensive: Measurement must provide a unified view of audiences across TV, CTV/OTT and online platforms.
  • Fair & Comparable: We should use the MRC viewable impression as the basis for counting impressions, reach and frequency, and report other metrics, such as duration, separately.
  • Privacy-Centric: Only solutions that are privacy-centric can meet consumer expectations and be durable for marketers long term.
  • Independent & Trustworthy: Solutions should be publisher agnostic, marketer-oriented, with transparent and auditable methodologies.
  • Actionable For Advertisers: We support competition and choice in measurement, while avoiding unnecessary complexity and costs for advertisers and agencies.

Alessi described the principles as "foundational" and called on the ad industry to "adopt clear principles" and utilize a principle-based approach reflecting the goals of major ad trade associations, including the Association of National Advertisers and the World Federation of Advertisers, in order to create a "fair ecosystem."

While the post does not explicitly attack the fledgling TV-centric JIC -- or joint industry committee -- and its rush to formalize its own set of standards, Alessi writes: "No new silos. Audience is king, so measure it all on a fair and comparable basis. Don’t silo video inventory based on arbitrary concepts like production value or curation."

The TV-centric JIC, which is being led by TV industry-owned OpenAP and the Video Advertising Bureau, currently includes TV companies and ad agencies, but no advertisers and certainly not pure-play internet video services like YouTube.

24 comments about "YouTube Issues Its Own Cross-Media Measurement Principles, Implies TV-Centric JIC Is A 'Silo'".
  1. John Grono from GAP Research, March 8, 2023 at 5:04 p.m.

    A question.

    The first of the five principles states "Measurement must provide a unified view of audiences across TV, CTV/OTT and online platforms."

    Isn't that principle just a different silo?   Apparently video is now the only marketing medium.   And within the video content realm, shouldn't formats such as OOH digital video be included?

    Sounds like a classic case of the pot calling the kettle black.

  2. Ed Papazian from Media Dynamics Inc, March 8, 2023 at 5:54 p.m.

    John, the sole focus of all of this blathering about "cross media compatibility" is "TV" in its various forms, including videos as well as Linear TV and streaming. There is zilch interest in OOH, radio/audio or printed display advertising in any  form. "We'll get to those later", they will say---but don't hold your breath waiting for that to happen.

    As for YouTube's "principles," they are exactly the same as what the other sellers are promoting---they want "impressions" calculated as messages appearing on a screen for two or more seconds---per the IAB formula---but perish forbid that attentiveness---which would be a real breakthrough---will be accepted as a standard metric. Horrors! If attentiveness were added, advertisers could see how many---or how few---people actually watch their commercials. Nope, that kind of metric can only be used individually by sellers as add-on "currencies"---if they think it will benefit them.

    Meanwhile we wait with baited breath to see what the ANA initiative comes up with, as it is going to be operational sometime next year, they have told us. My guess is that it will be just about the same as what the media seller "JIC" and YouTube are proposing---big data panels, plus "impressions"---only please do it "transparently," "fairly" and correctly, and by all means respect the panel members' "privacy." As for being advertiser-relevant, even though they may say it's so, don't bet on it.

  3. Tony Jarvis from Olympic Media Consultancy, March 8, 2023 at 6:45 p.m.

    Amen John & Ed.  The US TV/Video measurement farrago inside the imbroglio continues, along with incredible ignorance of what actually constitutes a JIC.  The current "alt-currency" group being managed by OpenAP & VAB is NOT a JIC, nor even close.  Ask the Europeans or the Aussies!
    And what should a meaningful media currency - singular! - be based on?   As John, Ed and I have frequently stated here, NOT "viewable impressions" aka "content rendered counts" (purely a device measure with no persons measurement), which represent a pathetic attempt by many media sellers to juice the numbers and reduce CPMs, and which have little relationship to achieving a brand campaign outcome.  Per Dentsu and Havas Media Group among others, media and media research now live in "The Attention Economy," which has meaningful relevance for advertisers.  As a reminder, it is the creative that is the primary driver of brand effects, albeit with media in a synergistic supporting role.

  4. John Grono from GAP Research, March 8, 2023 at 8:16 p.m.

    Ed.  I'd like to issue a potential health warning to you.   Holding your baited breath may not be wise.

    We have the same issue in AU, but the industry bodies for each medium basically operate under a JIC structure with the large media owners (that have the dough), and include the MFA (Media Federation of Australia) representing the media agencies (we spend the dough).

    That structure means that all the compliant media owners within each medium operate to the same standard, meaning that the media owners can use the data with confidence.

    The MFA then sits in just about all the JICs so as to ensure the best comparability we can get.   By that I mean that a TV viewer is comparable to a radio listener, a newspaper or magazine reader, a cinema goer, an OOH driver, pedestrian, shopper etc., and someone using the internet on any sort of device (which of course includes TV, radio, press etc.).

    Then the hard bit starts.   How do we 'stitch' these 'active viewers' together to get a de-duplicated reach and frequency?   Oh, and who should pay for this tricky bit?

  5. Ed Papazian from Media Dynamics Inc, March 8, 2023 at 9:21 p.m.

    John, "deduplicating" reach and frequency across platforms---including print, radio, OOH, etc., as well as TV---is, in my opinion, not much of a problem. To be honest, most media plans in The States---and, I suspect, elsewhere---make media mix decisions arbitrarily, without attempts to determine audience duplication, frequency patterns, ad exposure, and other variables for alternative mixes---even in those rare cases where such are evaluated.  Even so, if this was important and clients demanded such analysis---few do---it's a relatively simple matter for a well versed media planner to determine what the combined reach of a TV schedule might be with a print media or a radio buy that is under consideration. You don't really need a monster survey that "deduplicates" all possible combinations of schedules down to the last decimal point---the "audience" data is simply not good enough. And then there is the question of comparability. Is a TV "impression"---determined by a commercial being shown on a screen---the same thing as claimed issue reading or assumed radio listening---let alone "page views" for digital display ads? I think not.

    The point about audience duplication applies to "TV" as well. This is basically a planning function---the buyers aren't tasked with it, nor are they trained for it. Again, if the planner is contemplating a mix of, say, 50 primetime GRPs per month plus 100 daytime GRPs---both in linear TV---plus a streaming schedule of 50 GRPs, he probably knows from past experience---expressed in his agency's R&F modules---that the prime time portion probably attains a 35% reach, daytime about the same, and streaming comes in somewhere in the 20-25% range. Combining these is not difficult. You wind up with about a 60% reach. Since the plans are not program-specific, there is no way to tabulate data from some "big data" service to get a better estimate.  Developing such modules is a basic media research responsibility at most large agencies.
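    Ed's back-of-envelope combination can be sketched with the random-duplication formula that many agency R&F modules use as a starting point. The inputs below are his illustrative figures (22.5% is the midpoint of his 20-25% streaming range); real models then apply duplication factors that pull the answer below this independence ceiling, toward his rougher 60% estimate.

```python
def combined_reach(reaches):
    """Combine schedule reaches assuming random (independent) duplication:
    the share NOT reached by the mix is the product of the shares not
    reached by each schedule."""
    not_reached = 1.0
    for r in reaches:
        not_reached *= (1.0 - r)
    return 1.0 - not_reached

# prime time ~35%, daytime ~35%, streaming ~22.5% (illustrative inputs)
print(round(combined_reach([0.35, 0.35, 0.225]), 3))  # 0.673
```

    Random combination is an upper bound here: schedules on the same medium duplicate more than randomly, which is why a planner's module lands nearer 60% than 67%.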

  6. Joshua Chasin from VideoAmp, March 9, 2023 at 3:21 p.m.

    The problem with incorporating attention into currency is that it is the role of the media vehicle to deliver an (attentive, engaged) audience to your spot-- but as the advertiser/agency, it is your job to engage them and hold that attention. Thus, while it is inarguable that attentiveness should be important to advertisers, I don't believe attention paid to the commercial is the responsibility of the media vehicle. The media vehicle leads the horse to water; the ad is what makes him drink. So I tend not to favor incorporation of ad attentiveness into currency.

    However, there is a case to be made for content attentiveness to become a component of currency. There is data to suggest that ads perform better in content that has a high degree of attentiveness. In fact, this is why the traditional TV companies are on the other side of this debate from YouTube; they invest a lot of money to create the content that entertains and informs us, and naturally they want that investment to deliver returns to their advertisers. And to get credit for those returns.

    I think ultimately we're going to have to figure out how to quantify the impact of content environment on advertising value-- same consumer, same creative, different content environments, is there measurable differential impact? Once we understand this (or understand it better), we can start to figure out how to layer measurement thereof into currencies.


  7. Tony Jarvis from Olympic Media Consultancy, March 9, 2023 at 4:10 p.m.

    I do not disagree in principle that measurement of persons-based contact and attention to media content needs to be distinguished from measurement of persons-based contact and attention to the creative messages.   This would help to better understand your point regarding the distinct influence of the media vehicle carrying the creative message on the primary power of the brand message itself to drive an outcome.  Such influence could be normalized based on a meta analysis by target group, by brand category, by creative mode, by various media formats, environments, contexts, etc.  However, notably for linear TV and video streaming, that raises the issue of pod position: my ad may not be adjacent to program content but to other ads, and it is consequently their ability within the media vehicle to sustain contact that generates attention for any subsequent ad.  Tricky!

  8. Ed Papazian from Media Dynamics Inc, March 9, 2023 at 4:56 p.m.

    It's a mistake to confuse attentiveness as it would be used as "audience currency" with "engagement". Of course it's the advertiser's responsibility to hold a viewer's attention once the ad is first looked at, and it's the ad's job to create brand positioning awareness as well as motivating the viewer in a favorable manner.

    But that's not what this is about. All that "attentiveness" will give us is a far more accurate estimate of a) how many program viewers were present when the commercial appeared on the screen and b) how many of those who were present actually looked at the screen while the ad message was on. In addition, we will get "dwell time"---how much of the commercial was watched and in what sequence---start to finish, only portions, a few seconds then out, etc. Dwell time does have some relevance regarding impact or engagement, but the first two---who is in the room and who watched for, say, at least two seconds---go primarily to correcting our raw "audience" figures---what we have been using all along---except that the average commercial minute "viewer" projections we have calculated our CPMs on for so long are wrong.

    It's simple really. Nielsen's system is telling us that 90-95% of those it qualifies as program viewers "watch" an average commercial, while almost nobody leaves the room. That's because people meter panel members do not report their many absences from the room, nor do they notify the system every time they look elsewhere, engage in other activities, etc. With attentiveness, as executed by TVision's webcams, you will get a much lower "commercial audience" figure than what we are now using---which is why the sellers are so set against attentiveness. And I can't blame them. But it's time to start using a far more accurate measurement of "commercial audiences," and attentiveness gives us that. The alternative is to continue to rely on "impressions" based primarily on the commercial being played out on a screen---not that it was watched. "TV" deserves better than that, and so do its advertisers.

  9. Joshua Chasin from VideoAmp, March 9, 2023 at 5:36 p.m.

    Ed, you're 100% right about people meters; over time there is a human tendency to migrate to the behavior of doing as little as possible to comply. I'm a big advocate of passive collection of people data as opposed to active (button-pushing.) 

    Tony/Ed, I don't mean to suggest that attentiveness to commercials is not important, or that advertisers shouldn't be deeply invested in it. But I tend to think it's the domain of creative testing as opposed to currency. If I was a TV programmer (I believe that's the currently in-vogue term for "network") and I had to charge less if the audience paid less attention to the spot, that would put me in the position of not wanting to air any spots I haven't tested, and of selling my best inventory to the advertisers who would run the most attention-getting spots. 

    I will also add that big data solutions (such as those offered by my employer) enable second by second reporting. We can see audience shifts for pods and within pods; with the caveat that these are device-level and not person-level shifts. However, when you look at enough of this data it really does make it clear how archaic the "average commercial minute rating" metric is.

    I once wrote a column for Mediapost back when I was at comScore, called "the last two feet." In it I made the point that it was easier now than ever before to know precisely what machines were doing; I was writing about computers at the time, but the same goes for mobile devices, as well as for TVs. But those "last two feet" (it's further for TVs)-- from screen to face-- was where the art and science of audience measurement really became critical. (I can imagine Tony opining, "also from face to brain.")

  10. Ed Papazian from Media Dynamics Inc, March 9, 2023 at 6:30 p.m.

    Josh, a number of years ago several of the large media agencies bought person by person data from Nielsen---of course, the person was not identified---in an attempt to explore reach and frequency patterns using "real data," not the usual formulae based on dated sample schedule tabs. While they were at it, they also looked into the performance of commercials under various circumstances---like excessive ad clutter, show types, position in pod, etc. What they found was rather disappointing, primarily because the data they were using was in reality set usage information---which captured only a small part of what was probably happening with the viewers who the system assumed were "watching." In other words, the data wasn't all that sensitive and produced very small differences between situations where one might expect much larger swings and departures.

    I'm not saying that set usage---or knowing if a commercial is playing on a screen---isn't indicative. It is---directionally. For example, you get more channel switching avoidance if a commercial has been "seen" for several days in a row, and worse if it plays even more often---like on the same day---compared to situations where there is a two-week or longer time gap between "impressions." But the differences are muted compared to what is probably taking place when the actual audience recognizes that it has just seen that commercial. Yes, you may get an 8-10% immediate tune out---which is a lot more than the normal rate of, say, 2-3%. But that doesn't mean that 90% of the program viewers "watched" the redundant commercial. Many---perhaps 35%---probably left the room, while another 35% looked away or did something else---used another screen, chatted with someone, ate a snack, etc.

    I don't believe that the addition of attentiveness will change the world, nor will it upset the sellers' apple carts. Indeed, if TV advertisers who are being fed the inflated R & F figures that are used today understood how few consumers they are actually reaching and how rarely this is happening, rather than deserting "TV"---which they are wedded to---they might up their spending to finally get the audiences that they have thought TV was "delivering"---rather than switching their ad spend to print media, radio or digital display ads. We have little to fear from attentiveness---but lots to learn from it.

  11. Tony Jarvis from Olympic Media Consultancy, March 10, 2023 at 2:18 p.m.

    Josh:  You stated: ..."big data solutions (such as those offered by my employer) enable second by second reporting. We can see audience shifts for pods and within pods; with the caveat that these are device-level and not person level shifts." 
    As brilliant and respected as you are - quite correctly in my opinion - I believe that statement makes Ed & my point. Device level data, aka "content rendered counts", or so called "viewable impressions", have neither persons based audience nor an Eyes-On/Ears-On or contact data as there is no persons measurement dimension.  And yet you claim "audience shifts"?
    In addition, and I believe Ed would agree, measurement of a screen or panel for content rendered is NOT an OTS, aka a "gross impression", which has always been determined based on measures of persons in the presence of the media vehicle with an "opportunity-to-see (or hear)" actual content.  Remember your days at Simmons?
    We do understand that for mobile ads and on-line PC ads, content rendered counts are being used as an OTS proxy for these one to one media, based on pirated device user consumer data.  However, this data is specious for most other major media, many of which have jumped on "device level data," which are generally heavily discounted by the media agencies, and indeed should be.  Sellers take note!  These agencies also understand that without Eyes-On/Ears-On or "contact" by the brand's target audience, preferably with "attention", there can be no campaign outcomes.  It is this "attention economy" approach to media planning, buying and selling that should be driving the measurement and therefore the currency (singular!) of every major medium.  (OOH has been there for the last 10+ years in most major countries around the world!!)
    Last but not least, if the creative message is not independently verified to meet all the Proof-of-Play specifications for the campaign as designated by the agency, even device level data can be invalid.  So much for ACR and much of programmatic?
    What am I missing?  

  12. Ed Papazian from Media Dynamics Inc, March 10, 2023 at 3:58 p.m.

    Of course I agree, Tony. The key point is that we do not have---and have never had---an indicator of opportunity to see---OTS. The system was never designed to determine whether a person who signified at the outset of a program selection that he or she was "watching" actually did so for every second that followed, providing the channel wasn't switched. Of course there are on-screen prompts from time to time reminding panelists to tell the system any time they "stop watching," but as Josh points out and as I have seen, once they get acclimated very few panel members bother to follow this instruction, as it would take far too much effort---imagine telling the system each time the viewer left the room that "viewing had stopped," only to inform it a few minutes later, "Hi, I'm back and 'watching' again." This might require ten or twenty such entries per five-hour viewing day---day after day---and then there are many more times when the "viewer" remained present but simply stopped paying attention. What about them?

    By the way, I'm not blaming Nielsen for this; it's the entire industry that's at fault---for demanding the impossible from a system that was only intended to measure whether a TV set tuned in a particular show and, later, to find out who was "watching." The basic point is that we do not have a measurement of the program audience just prior to a commercial break. And we know from TVision---and confirmed by other sources as reported in our report, "Total TV Dimensions 2023"---that about 30% of those in the room just before the break are absent per commercial. How can we count such "absentee viewers" as having an opportunity to see the commercial? And how many of those who started to watch the program 10 or 15 or 25 minutes ago, and are included in the average commercial minute tallies, left the room some time ago and are still away?

  13. Joshua Chasin from VideoAmp, March 10, 2023 at 4:41 p.m.

    1 of 2


    You write:

    "You stated: ...'big data solutions (such as those offered by my employer) enable second by second reporting. We can see audience shifts for pods and within pods; with the caveat that these are device-level and not person level shifts.'... I believe that statement makes Ed & my point. Device level data, aka 'content rendered counts', or so called 'viewable impressions', have neither persons based audience nor an Eyes-On/Ears-On or contact data as there is no persons measurement dimension. And yet you claim 'audience shifts'?"

    To be clear, I don't disagree with your and Ed's point.

    I do claim audience shifts are discernible at very granular levels from big data assets. I am reasonably certain that when sets tune away from channels, the number of people in the same rooms as those sets watching those channels can only go down. I've never shown anyone on the buy side or the sell side second by second audience flow data at the machine level and had them dismiss it as providing no insight on audience dynamics. We can debate the nuances of measuring persons versus devices, but I'm pretty sure we can agree that when the devices tuned go away, they take the persons with them.

    When I started in audience measurement, just after the last Ice Age, we measured television audiences through meter/diary integration. The meters gave us a very accurate read on what the sets were doing in panel households; but we needed to place paper diaries amongst a whole different sample of households to figure out what the people were actually doing. (And yes, obviously, this was before we had technologies to enable us to start getting at concepts such as "eyes on.") In a very real sense, we find ourselves in a similar place now. Smart TV ACR data and STB RPD give us an excellent read on what smart TVs and set top boxes are doing, at a massive scale. Given the fragmentation in today's viewing audience, where a 0.7 is a good rating, it is simply impossible to measure audiences accurately without that scale. Accurately, in this context, meaning robustly and reliably. We all understand that this doesn't measure people, it measures devices. Thus far that is a trade off the industry has had to make.

  14. Joshua Chasin from VideoAmp replied, March 10, 2023 at 4:42 p.m.

    2 of 2

    There are two ways to deal with that trade off. One way is to use panels to understand the behavior of people, and to derive (or inform) demographic VPVH's that may be applied to the household/device level tuning data. (The big data is the "meter," the panel is the "diary.") In addition, panels (or other techniques for measuring the attention of persons) may be used to measure attentiveness and eyes-on, which buyers and sellers may choose to overlay or integrate with audience counts.

    The other way to deal with that trade-off, and I fear this may make your head explode, is to simply transact on households as opposed to persons. To be fair, that concept also made my head explode when I was first exposed to it. I remember telling my colleagues at my previous employer, "if I put one piece of creative on one piece of glass, is that one impression or three? That should matter." But more and more, especially in the space of data driven linear, where the target is not defined by demography but rather is an "advanced target," transactions are actually getting done at the household level. I believe part of the reason for this is that some of the advanced target data is in fact household level data; so the combination of measurement services that report ratings at the household level, and advertisers that use household level targets, has absolutely shifted some of the business in the direction of household level as opposed to persons level transactions.

    Personally, I remain of the opinion that I want to know if that one piece of creative on one piece of glass reached one person or three (or none.) And the new VAB "JIC" calls for both household- and person-level reporting.

    Where I think the industry is at now, is trying to figure out the best way to take advantage of these big data assets, without losing our ability to understand the way people are behaving, and to optimize the bridging of the two data types.

  15. Ed Papazian from Media Dynamics Inc, March 10, 2023 at 8:02 p.m.

    Josh, what I don't get is the idea that you can target consumers---or, to be fair, types of consumers---by using household device usage data. In my book, this applies only when you are a brand whose clientele can be defined almost exclusively by location---by Zip code or finer---or by extreme affluence. But if you are targeting based on the life cycle of the consumer, or occupation, or presence of children, or age or sex, and, especially, on the mindset of the consumer---whether he/she is price conscious, status conscious, convenience conscious, diet/health conscious, ad receptive for your category, etc., etc.---which is how many brand campaigns are fashioned, then your error margins using household data are going to be huge---often 50% or more. You may think that a TV show or a platform/channel delivers the kinds of homes you are trying to reach, but as often as not, you will be wrong about who in that home is viewing.

    To demonstrate this just compare the profile of the average TV viewer as depicted by household set usage and what we know  from people studies not only by Nielsen but Simmons, MRI, and many others. According to the set usage tallies, the prototypical TV viewer is more likely to be young---under age 40---- and reside in an upper income household. But the people studies all tell us the exact opposite---the typical TV viewer is older---55+---and lives in a middle to low income household.

    Both are correct in that younger/affluent homes do use their sets more frequently---because they have more residents---over three persons per home---as well as more receivers, while older, low income adults usually live in homes with one or two residents and fewer sets. Result: the average resident in a younger/affluent home is involved with only 35-40% of that home's set activities, while the average resident in an older/low income household is the one who is watching 65-75% of the time. So set usage is not going to give you the targeting discrimination that is needed, but it will be great for the sellers, as their shows will be made to appear artificially better than they actually are for brands catering more to younger/affluent customers.
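    The arithmetic Ed is describing can be sketched directly; the 0.375 and 0.70 involvement shares below are simply the midpoints of his 35-40% and 65-75% figures, not measured data.

```python
def household_to_person_miss_rate(involvement_share):
    """Chance that a household-level 'impression' did NOT involve a given
    resident, given that resident's share of the home's set activity."""
    return 1.0 - involvement_share

# Illustrative figures from Ed's comment (midpoints of his ranges)
profiles = {
    "younger/affluent home (3+ residents)": 0.375,
    "older/low-income home (1-2 residents)": 0.70,
}
for segment, share in profiles.items():
    print(f"{segment}: a household impression misses a given resident "
          f"about {household_to_person_miss_rate(share):.0%} of the time")
```

    This is where the "often 50% or more" error margin comes from: in the larger households, treating a home-level tuning event as a person-level exposure is wrong more often than it is right.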

    As for attentiveness, since it has been demonstrated by TVision that a panel can be set up and such data collected, why the resistance? I understand the sellers' motivation---fear of smaller numbers---but what about the advertisers---why are they so silent?

  16. Claudio Marcus from Comcast, March 14, 2023 at 10:12 a.m.

    The Media Rating Council (MRC) definition of viewable impressions for cross-media video measurement is for an ad to be 100% in view for 2 seconds or more.

    By this definition, any YouTube video ad that is viewed for at least 2 seconds and is then intentionally skipped by the user (once the 5-second countdown that enables the user to skip the ad has elapsed) would still be counted. Perhaps YouTube can share what percent of all YouTube ads are skipped before and after 2 seconds.

    Facebook states that an impression is counted as the number of times an instance of an ad is on screen for the first time. Facebook acknowledges that its method of counting video impressions differs from industry standards for video ads, as its ad impressions are counted the same way for ads that contain either images or video, which means that a video is not even required to start playing for the impression to be counted.

    It is also worth noting that back in 2015 the MRC and IAB definition for a viewable video ad was that 50% of it must be visible on a user’s screen for at least two consecutive seconds. However, that MRC definition was challenged by GroupM, whose clients argued that for a video ad impression to count, at least 50% of the ad's duration should be fully in view, with the video player’s sound turned on throughout, and video play must be user-initiated.

  17. Joshua Chasin from VideoAmp replied, March 14, 2023 at 10:40 a.m.

    Also, Claudio, it's worth noting the MRC calls for duration weighting. There's the 2-second threshold, which is the minimum requirement to BE an impression. But the MRC also calls for differentiation BETWEEN impressions by duration.
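    The two-step idea Josh describes can be sketched as follows; the 2-second threshold comes from the thread, but the linear credit up to a 30-second full-credit duration is a stand-in of my own, not the MRC's actual weighting formula.

```python
def duration_weighted(views_seconds, threshold=2.0, full_credit=30.0):
    """Step 1: a play must clear the in-view threshold to BE an impression.
    Step 2: qualified impressions are differentiated by seconds in view
    (hypothetical linear weighting up to `full_credit` seconds)."""
    qualified = [v for v in views_seconds if v >= threshold]
    weighted = sum(min(v, full_credit) / full_credit for v in qualified)
    return len(qualified), weighted

count, weight = duration_weighted([1.0, 2.5, 10.0, 30.0])
print(count, round(weight, 3))  # 3 qualified impressions, ~1.417 weighted
```

    The point of the second step is that a 2.5-second glance and a full 30-second view both count as impressions, but they no longer count the same.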

  18. Tony Jarvis from Olympic Media Consultancy, March 14, 2023 at 12:01 p.m.

    These latest comments from Claudio (and partly Josh), entirely focused on "viewable impressions" aka "content rendered counts," underline, in my opinion, just how misguided TV/video measurement is in the US today, as elucidated by Ed and myself as part of this commentary.  This repeated emphasis ignores well established, more meaningful measurement dimensions provided by other major media, notably print (MRI including MPX), OOH (GeoPath including Eyes-On) and audio (Nielsen PPM - hearing).  Apparently, so much for cross media comparisons based on today's TV/video/social measurement farrago!  Josh: Surely our mentor Erwin Ephron is turning in his grave.
    Device based HH measurement takes us back 50 years even if there is independently verified Proof-of-Play of the creative message, the initial fundamental media delivery requirement.  Without a persons (in the HH) presence in the viewable/audio zone with Eyes-On/Ears-On at a minimum, or attention, there can be no campaign outcomes. 
    In addition, the inclusion by MRC of "duration weighting" to "viewable impressions", versus the significantly more meaningful "attention seconds", made a farce out of an already deliberately misleading term, "viewable impressions" - courtesy of IAB, I understand.  When is MRC going to correct this regrettable and abused misnomer and use "Content Rendered Counts" rather than "viewable impressions"?
    Of note, Media Agencies address the relevance and consequent impact of duration/ad size and many other media attributes across all media vehicles considered in any brand campaign planning.  This includes, for example, weighting for double page spreads versus half pages for magazines, or weighting for 14' x 48' billboards versus street furniture; etc., etc., etc.  Which rasies another question for MRC.  When will MRC address other media and its outcomes measurement Standards (actually Guidelines) to ensure truely meaningful comparabliity and harmonization across all media? 

  19. John Grono from GAP Research, March 15, 2023 at 6:45 p.m.

    Wow.   I've been out of the loop for a while.

    What a great discussion.   One little thing I picked up is that in AU Nielsen provide the data to OzTAM who own and run the TV ratings.   The data collection in AU is per second so you do see movement in the panellists.   The TV ratings are the aggregation of the seconds into the reported minutes, which are then averaged to the 'program rating'.

    The 2-second rule is apt for digital video (in the main), but TV has very different content.   Would you REALLY want to count someone who watched 2 seconds of the Super Bowl as a viewer?   Maybe we should look at a proportion of the total content's duration as a more flexible and appropriate threshold (and I think it would likely need to be curvilinear).
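    As a purely hypothetical sketch of the kind of curvilinear threshold Grono describes -- the specific curve below is invented for illustration, not anything proposed in this thread -- one could keep the 2-second floor for short clips while scaling the requirement sublinearly with content length:

    ```python
    import math

    def min_view_seconds(content_duration_s: float) -> float:
        """Toy curvilinear threshold: a 2-second floor for short content,
        growing with the square root of duration so that long-form content
        (e.g. a multi-hour broadcast) demands more than a 2-second glimpse
        without requiring a strict fixed percentage of the whole."""
        return max(2.0, 2.0 * math.sqrt(content_duration_s / 30.0))
    ```

    With this curve, a 30-second clip still needs only 2 seconds, but a four-hour Super Bowl broadcast would need roughly 44 seconds before someone counts as a viewer.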

  20. Ed Papazian from Media Dynamics Inc, March 15, 2023 at 7:52 p.m.

    John, in fairness to the IAB and its two-second rule, you do need some way to characterize ad exposure that can be applied uniformly to messages of varying lengths. So I have no problem with the two-second rule, as they are not saying that watching for only two seconds necessarily has value to the advertiser. In reality, the average person who watches a TV commercial for at least two seconds keeps his/her eyes on the screen for about 45% of its duration---some watch for only a few seconds, about 10% watch the message in its entirety, and there are many gradations in between. As I'm sure you know, these depend on the demographics of the viewer, how cleverly the commercial is fashioned, what the commercial is about, whether it's been seen by the same person recently, the amount of ad clutter in the break, etc. So it's incumbent on us to consider dwell time, as well as the fact that viewing commenced, when evaluating "attentiveness" findings.

  21. John Grono from GAP Research replied, March 15, 2023 at 10:24 p.m.

    I agree Ed.   I've been a bit crook and I don't think I explained myself clearly enough.

    Yes, I lobbied heavily with The IAB AU Measurement Council for a minimum 2 seconds for the measurement of ads.

    My comment about using a 'duration proportion' related to content, as opposed to ads.

    Most ads are in the 6-30 second range - digital ads don't have to slavishly comply with a 15-second ad having exactly the right number of frames so as to be correctly injected into the network's broadcast schedule.  So I'm comfortable with the 2-second threshold for that duration range.

    But some think that the 2-seconds should apply to all content.

    Does 2 seconds of a 1-minute YouTube video mean that it has been seen - maybe?    What about a 5-minute YT video?   Is 'at least 2 seconds' a fair threshold when you don't even have to watch 1% of the content?   'At least 2 seconds' for the Super Bowl would be 1% of 1%.

    Have I explained what I meant a bit more clearly?

    The next question would be ... should ads be measured to the same thresholds as TV content?   If so, would the 'ad audience' need to be the ratio of viewing duration to ad duration?   Heaven help us if TV regressed to a similar model so as to be measured in parity!

  22. Ed Papazian from Media Dynamics Inc, March 16, 2023 at 9:29 a.m.

    Yes, John. But the problem from a media buying point of view is that you usually don't know in advance what percent of your commercials will be "15s" or "30s"---often this is not even known by the brands---and upfront buys cut across brands---so what do you use for "audience currency"? That's why a single definition is required---even if it really needs more detail to be meaningful later. If you had a variable definition---like 2+ seconds for "15s", 5+ seconds for "30s" and 10+ seconds for "60s"---you would have chaos when the commercials were eventually produced and sent to the networks, channels, etc. Your GRPs might be too many---or too few---to accommodate the commercials.

  23. Joshua Chasin from VideoAmp, March 16, 2023 at 3:11 p.m.

    (1 of 2; when I engage with Tony Jarvis I turn into a fricking essayist.)

    The 2-second rule is a result of the phenomenon of "impressions" being counted that never made it to the screen. In the olden days of digital, a publisher could just put 25 banners all the way at the bottom of the page, and nobody cared because these banners were all sold on the basis of CPA; if no one gets that far down and no one clicks, well, no one pays and no harm done. Server counts would log these as served “impressions,” but advertisers were paying for actions, not exposures. Once advertisers started using the Internet as a branding, not a direct response, medium, paying for "impressions" that never reached the screen became problematic. Hence the development of viewability. Viewability was never intended as a measure of impression quality; rather, it was an acknowledgment of the fact that the job of the media vehicle is to get the ad onto the screen. As I noted above, while MRC calls for viewability standards to be met as a MINIMUM for an impression to count, they also call for duration weighting. I don't think anyone, even the (super smart) people at YouTube (hey Tina!), would suggest that two seconds of viewership communicates as compellingly as 30 seconds. Viewability is like a qualifying heat; duration and attentiveness are what wins the race.

    (At this point I must also note that cell phones are permanently changing the attention spans of people, and I say that as the father of a teenager. In the age of TikTok, like it or not, subsequent generations are going to have to be reached in six second spots. I don't like it any better than you do. But raise a teenager today and then tell me I'm wrong. In 30 seconds my daughter has likely watched 5 videos-- and "liked" 3 of them.)

    More to come...

  24. Joshua Chasin from VideoAmp replied, March 16, 2023 at 3:12 p.m.

    (2 of 2)

    ...As far as using device-derived data to count audiences-- honestly, that train has left the station. It is simply impossible to count campaign exposures using a panel, because of the extent to which media are fragmented. The world where a good rating was a 20 was very different from the world we live in now, where a good rating is a 0.7. And on top of that, ads don't follow content anymore. Tony might watch SVU on NBC tonight; I might watch it on Peacock tomorrow; and Claudio might watch it on Comcast VOD over the weekend. We're all watching this week's episode of SVU, but we'll all see different spot loads.

    Also, try to find a TV network company today that thinks streaming impressions should be counted off a panel, as opposed to from census data counts. And streaming today accounts for more viewing than broadcast or cable, and soon it will account for more than broadcast and cable combined. At this point, arguing against big data audience measurement is whistling past the graveyard.

    I'm sure I've said in an earlier post that I tend to fixate narrowly on the realm of currency measurement, because that's what I do for a living. In this day and age, if you remove big data assets from the equation, buyers and sellers simply can't transact. So the question becomes: how do we take this device data and map it to households and persons? The household part is easy: if companies like mine can't map smart TVs or set-top boxes to specific households, those devices don't make in-tab (our in-tab footprint is comprised of households, not sets or devices---indeed, households for which we have rosters with demography associated with each person). (Yes, I know all about error in individual identity partners. We're on it.) As far as knowing who in the household is watching, this can be done through big data, data science, and a panel serving as a training set. In fact, driving such "personification" is one of the best use cases of a panel today. Digital census data can also be mapped through devices to households, with demography probabilistically assigned.
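    The "panel as a training set" idea Chasin describes can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration (the feature names, roles, and frequency-count "model" are not VideoAmp's methodology): a panel where the individual viewer is known yields conditional probabilities of who in a household-type is watching, which are then applied to big-data households where only the device-level event is observed:

    ```python
    from collections import defaultdict

    def train_personification(panel_events):
        """panel_events: list of (daypart, genre, viewer_role) tuples from a
        panel where the individual viewer is known.  Returns a lookup of
        P(viewer_role | daypart, genre) estimated by simple frequency counts."""
        counts = defaultdict(lambda: defaultdict(int))
        for daypart, genre, role in panel_events:
            counts[(daypart, genre)][role] += 1
        model = {}
        for key, role_counts in counts.items():
            total = sum(role_counts.values())
            model[key] = {role: n / total for role, n in role_counts.items()}
        return model

    def personify(model, daypart, genre, household_roster):
        """household_roster: {member_name: viewer_role} for a big-data household
        where only the device event is observed.  Assigns each member the
        learned probability of having been the viewer."""
        probs = model.get((daypart, genre), {})
        return {member: probs.get(role, 0.0)
                for member, role in household_roster.items()}
    ```

    A production system would use far richer features and a proper probabilistic model, but the shape is the same: the panel supplies labeled examples; the big-data footprint supplies scale.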

    Thanks to the science of attribution, we can quantify the effectiveness of campaigns planned and bought using measurement based on large device-level data assets, compared to campaigns planned and bought using what I'll call old school measurement methods. I have every confidence that such an exercise would demonstrate the improvement in efficacy of big data solutions.
