Duration-weighted impressions could be the most significant change in the underlying currency advertisers and agencies use to value ad buys across media, but the concept has gained relatively little awareness or understanding on Madison Avenue.
Only a third of ad executives interviewed by Advertiser Perceptions last month said they were even aware of the new metric, which will become the industry standard for evaluating cross-media advertising campaigns, effective in 2021.
In March, the Media Rating Council said it was delaying duration weighting until 2021 to give the industry time to adjust to the new metric. At the time, MRC CEO and Executive Director George Ivie told MediaPost that the request to delay its rollout came mostly from the “demand-side,” meaning advertisers and/or agencies.
Interestingly, even among the 33% of ad execs who said they were aware of duration-weighted impressions, only a third of them said they understood the concept very well.
In a nutshell, the method uses a common denominator of time for measuring the impressions value of viewable video ad delivery across media, including TV, online and mobile screens.
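For illustration only, here is a minimal sketch of how such a time-denominated metric might work, assuming a simple scheme in which each viewable play is credited in proportion to the seconds actually watched; the MRC's final methodology may well differ, and all names and figures below are hypothetical.

```python
# Illustrative sketch only: credits each viewable play in proportion to
# seconds watched versus the ad's full length. Function and variable
# names are hypothetical; the MRC's actual weighting may differ.

def duration_weighted_impressions(play_times_secs, ad_length_secs):
    """play_times_secs: seconds each viewer actually played the ad."""
    total = 0.0
    for secs in play_times_secs:
        played = min(secs, ad_length_secs)   # cap credit at full length
        total += played / ad_length_secs     # fractional impression
    return total

# A 30-second spot played in full earns 1.0 impression; a mobile view
# abandoned after 6 seconds earns 0.2, putting TV and mobile on the
# same time-based denominator.
print(duration_weighted_impressions([30, 30, 30], 30))  # 3.0 (TV)
print(duration_weighted_impressions([6, 2, 30], 30))    # ~1.27 (mobile)
```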
Some supply-side stakeholders have also expressed concerns about the change, implying that it might not be the most equitable way of assigning impression value across screens, but have been reluctant to suggest an alternative method.
Asked on an open-ended basis what they believe is the best “denominator” to use for comparing ad impressions across media, 77% of ad execs responding to the Advertiser Perceptions study cited “time” (see open-ended responses below).
Strange? Many of the respondents say they don't understand the "standard," or are fuzzy about it, yet most agree that it's the best way to go? Also, do we really expect non-media people to have even the slightest knowledge about this, let alone any interest?
Duration weighting is a VERY BAD IDEA. It assumes the effectiveness of media is directly related to the length of time someone is watching a video ad. That is a profoundly bad assumption, as studies repeatedly show that a 6-second video ad can outperform a 15-second ad. Furthermore, even on TV, :30s are not twice as effective as :15s. So what problem are we trying to fix, and are we only making it worse?
Joel, there are no studies of specific "6s" that I've seen demonstrating that they significantly outperform longer commercials for the same brands in terms of selling power. That's probably one reason why there are virtually none of them on "linear TV".
Very few media execs, and even fewer ad execs, understand equivalized base :30 unit lengths, which have been around for decades. I can only imagine the arithmetic insanity that will accompany this additional recalibration metric.
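For readers unfamiliar with the convention the commenter mentions, equivalization re-expresses mixed unit lengths as base :30 units by weighting each spot by its length relative to 30 seconds. A rough sketch, with invented schedule figures:

```python
# Hypothetical sketch of equivalized base :30 units: each spot is
# weighted by length / 30, so a :15 counts as half a unit and a :60
# as two. The schedule figures below are invented.

def equivalized_30s_impressions(spots):
    """spots: list of (unit_length_secs, impressions) tuples."""
    return sum(length / 30 * imps for length, imps in spots)

schedule = [(15, 1_000_000), (30, 500_000), (60, 250_000)]
print(equivalized_30s_impressions(schedule))  # 1,500,000.0 equivalized
```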
I find it extraordinary that so many advertising executives are blind or ignorant to 'duration weighting,' which has been a fundamental tenet of media research for many, many decades.
Starting with TV as an example, we need duration-weighted programme ratings because that is the best (simple) estimate of how many people will have the opportunity to see the ad. Take a programme with a reach of 8 million viewers - i.e., they watched at least a minute of the programme. But if its rating, the average-minute audience (i.e., duration weighted), is 4 million viewers, then using the 8 million audience means you have only a 50-50 chance of your ad being seen (and Ed, I am talking about Opportunity To See, as opposed to Likelihood To See, which is heavily dependent on the quality of the ad).
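A quick sketch of the arithmetic in this example, with invented minute-by-minute figures chosen to match the 8-million reach and 4-million average-minute audience:

```python
# Invented minute-by-minute audiences (millions), chosen so that reach
# (watched at least one minute) is 8M while the duration-weighted
# average-minute audience is 4M, as in the example above.
minute_audience = [8, 6, 4, 3, 2, 1]

average_minute_audience = sum(minute_audience) / len(minute_audience)
print(average_minute_audience)  # 4.0 million

# An ad airing in a random minute reaches the average-minute audience,
# not the reach: 4M / 8M = a 50-50 chance that a "reached" viewer
# actually had the opportunity to see it.
```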
The same principle applies to all broadcast media. Print/press usually refers to average issue readership as its headline number (i.e., reach), but the closest to the broadcast standard is when data like 'proportion of issue read' is applied, which savvy media buyers would use.
All the above is based on audience measurement of the medium, and not of the ad. The media owner is responsible for delivery of the promised audience and environment. A good ad can't improve on that number, but a bad ad can fracture it.
So Joel makes a valid point as to whether this should also apply to ads. His argument about the effectiveness of an ad is valid in broad principle but not always correct. For example, in a 30-second TV ad 'the sting' may be in the last few seconds, so watching the first six seconds is irrelevant. Of course, if 'the sting' is in the first six seconds, then no problem. This also applies to online ads, as it is a behavioural issue and not a delivery issue. The thing is, all ads are different, yet we're after a de facto standard. And media owners can't control the quality of the ads.
But if you look at the current status, where zero seconds can count as an ad viewed (i.e., it was initialised from an ad server even though the recipient had already swiped the screen - and yes, that is still around), it is much worse than using duration weighting. The IAB recommendation of a 2-second minimum threshold goes some way to tidying up the mish-mash of different standards (or lack of them).
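To make the contrast concrete, a small sketch (with invented play-time records) of how a 2-second minimum threshold changes what gets counted versus the zero-second status quo:

```python
# Invented play-time records: seconds each served ad actually played.
served = [0, 0, 1, 2, 5, 12, 30]

MIN_VIEWABLE_SECS = 2  # IAB-style minimum threshold, per the comment

counted_now = len(served)  # status quo: every ad-server call counts (7)
counted_thresholded = sum(s >= MIN_VIEWABLE_SECS for s in served)  # 4

print(counted_now, counted_thresholded)
```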
John, the TV industry does not have a valid indicator of "opportunity to see" where commercials are concerned. Nielsen does not know whether a person who claimed to be "watching" a TV show when the channel was first selected is still in the room when a commercial break appears, or whether the person leaves the room at a given point in the break. As a guess, the absentee-viewer factor during commercials, regardless of their execution quality, may be on the order of 10-15% per ad, perhaps higher. As for attentiveness, sure, it varies, with interesting or first-time-seen ads usually doing better; however, when such a message is buried in a ten-commercial break, many "viewers" may have already left the room when it appears, or are using another screen, or are just not paying any attention. I should add that asking Nielsen to rectify this situation without adding some sort of eyes-on-screen verification, such as TVision might provide, is probably fruitless. A dramatic change is required, and I'm not at all sure that the TV ad sellers will support such a move.
Ed, by "Opportunity To See" I mean that the TV was on when the ad was shown ... so someone in that household had an opportunity to see that ad. The "Likelihood To See" would be when we factor out (i) that not everyone in that household was watching that TV set when the ad was broadcast, (ii) that some (or even all) of those in the household that were watching the TV when the ad was broadcast may have left the room, (iii) that those that stayed in the room when the ad was broadcast were not distracted by other things such as chatting, a phone call, a tablet, a newspaper or magazine, a book, a puzzle etc.
Scenarios (i) and (ii) are taken into account by the "button pushing" involved with the TV meter, but the compliance rate would not be 100% (the last tests I saw in Australia showed around 88%-92% compliance), so there is likely an overstatement of the "in room" audience.
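A back-of-envelope illustration of that overstatement, assuming, crudely, that non-compliant panelists are logged as present while out of the room; the audience figure is invented:

```python
# Back-of-envelope only: if 8%-12% of panelists fail to press the
# button when leaving the room, the meter logs them as present and the
# reported in-room audience overstates reality. Figures are invented.
reported_in_room = 4_000_000

for compliance in (0.88, 0.92):
    print(round(reported_in_room * compliance))  # crude adjusted figure
```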
Scenario (iii) is much more complicated. I remember demonstrating Nielsen's "Passive Meter" here in Australia in the early '90s. It was basically a humungous box with facial recognition that operated whenever the set was on - and it was expensive ... and it never got off the ground. I also recall that "whoopee cushions" on the sofa were trialled decades ago. But the biggest issue was privacy, and I believe it still is. Would you want a camera watching your every move, blink, nap etc. when you are kicking back to watch TV? I wouldn't.
In essence OTS is probably reasonable. LTS is highly unlikely (though infra-red heat maps spring to mind).
John, while you and I plus a handful of others may have a good idea of what the numbers actually mean, as well as how they are derived, 99.9% have no idea, and they assume that OTS means ad exposure. Hence all of the blabbering about reach or frequency, frequency capping, etc. Worse, brand managers are deluding themselves with reassuring but bogus reach & frequency estimates, which are taken literally as exact indicators of their ad exposure.
Here in the States, there has always been ample evidence of the extent of commercial avoidance via leaving the room. Camera studies dating back to the late 1950s showed that, as did many others (Percy's heat sensors in the early 1980s, for example), and, just recently, TVision is reporting a 29% absentee-viewer rate. Yet nobody seems to be paying attention or drawing conclusions about variations by program or network type, demos, etc. I think that it's up to folks like us, who know what the numbers actually represent, to keep pointing out their inherent fallacies and, in my case, we will shortly release to our Media Dynamics Direct subscribers a recommendation for doing something about it.