Measurement standardization is critically important for industry cohesion, and yet the issue remains: we still do not have complete cross-platform video measurement in place. This gap has caused significant fatigue and duplication, especially in the mobile and OTT space.
Further, publishers and networks are working in silos, measuring with different tagging systems, including SDKs and watermarks. Ultimately, they lack a true valuation of their content.
Conceptually, the answer is simple. We need a single, open-source, industry-standard identifier that recognizes both ads and content while accounting for and capturing duration of viewing.
There are already initiatives underway that offer potential solutions.
The Coalition for Innovative Media Measurement (CIMM) is at the forefront, via their TAXI (Trackable Cross-Platform Identification) initiative. It is about binding Ad-IDs and Entertainment ID registry (EIDR) identifiers into both ads and content as they move across platforms. Additionally, Integral Ad Science (IAS) has created an open source SDK program for in-app viewability.
Yet, we still can and need to do more.
To that end, it’s essential for the industry at large to advocate for a universal solution for identifying when, where and how long a video ad was viewed. As part of this, duration must be treated as a key dimension, keeping pace with the MRC’s efforts to revise its video viewability standards, which make duration a central feature.
In addition, duration is an essential measurement to determine the impact of brand storytelling — whether the message is being truly absorbed by the viewer. Marketers place a premium on audience exposure; it’s a valued currency.
In fact, the industry has already taken notice, with in-market products such as YouTube’s TrueView for Reach.
While some technical work is still needed, bringing together open source options, like TAXI and IAS’s viewability code, seems like a logical foundation on which to build, especially since, like mobile, OTT environments are app-based. This becomes increasingly important as advertisers seek to diversify spend and prospect new ways to reach an ever-fragmented consumer base.
We remain bullish on this issue, as it is in the best interests of all parties — media partners, brands, clients and more. A single solution creates tremendous efficiencies on all sides of the equation: Publishers and advertisers would only have to employ a single tagging solution. Any measurement firm would theoretically be able to identify and leverage this standardized process.
Now comes the real challenge: building momentum and consensus within the industry. Let’s get to work.
Agreed, David. However, using time-on-screen as a measurement of ad "viewing" is not going to get you anywhere unless other metrics, such as ad message recall and sales motivation, are also included. Just because one video commercial is on a user's screen for 20 seconds while another is on for only 10 seconds doesn't mean that the first one was twice as effective, nor does it mean that it was any more compelling. Also, you need to account for type and length of message, especially the latter. How do you compare a five-second video ad with a 15- or 30-second TV commercial using time-on-screen?
I have a feeling that we are asking for too much, too soon in our attempt to quantify media and ad audiences on a purely comparable basis. We can't even define or measure TV viewing on a minute-by-minute basis, and certainly not for commercials. How do we expect to do this for digital platforms? There are so many variables to consider. For example, are mobile users more attentive to ads when they use their smartphones at home versus when they are walking down a street, talking with friends while, at the same time, visiting Facebook?
I think that we should backtrack and think about developing a proper way of determining "viewing" for various forms of video content and see if we can come up with effective yet meaningful (or, I might say, "actionable") definitions. Then we need to see whether what we come up with can even be measured with some degree of accuracy and affordability. Only then can we tackle the cross-platform distinctions and try to zoom in on ads, not just program content. It's a daunting task and I'm not sure that it is possible. But we've got to move forward in logical steps, like working with building blocks, not bypass all of the issues and pitfalls to move quickly to "the answer".
Ed, I completely understand your POV. And it's quite uncommon that our views diverge.
But we have to consider what it is that the media owner is selling. For example, a TV network is (mainly) selling a block of 15 or 30 seconds in a piece of commissioned or acquired content that (typically) has mass appeal, and within that audience various demographic skews.
Placing an ad for a female-skewed brand in a male-skewed programme would produce much lower ad recall and sales motivation, and vice versa. Placing a poorly executed, poorly conceived ad for a female-skewed brand in a male-skewed programme would produce even lower ad recall and sales motivation, ceteris paribus.
However, these factors are out of the control of the network. The whole idea of 'penalising' a network for poor creative strategy or execution, or poor media strategy and placement, strikes me as misguided.
Should such a model come to fruition, networks should be able to refuse ads that are (i) strategically bad, (ii) poorly executed or (iii) poorly placed.
So what are the ratings for?
They are to enumerate the Opportunity-To-See the ad. They are not the Likelihood-To-See (and Buy) a product.
John, when I refer to using other metrics, such as ad recall, impact, etc., as a way to determine whether ads are seen, I am not suggesting that all ads are equal in attentiveness. The issue is not how various types of ads might perform in different exposure situations but, rather, to determine the likelihood that an average commercial which appears on a user's or viewer's screen will be noted or "watched".
When it comes to "linear TV" I feel fairly confident in my estimate that about 55% of the reported average commercial minute "viewers" actually watch the average commercial to some extent. This is based on numerous factors, such as leaving-the-room studies, eye-camera and other observational studies, and commercial recall findings. However, where we are heading is the acceptance of message-on-screen as a surrogate for commercial viewing, and I don't buy this for digital media, especially mobile. That's why I suggest that other metrics be employed. If they show the same results as are found for "linear TV", fine, but so far the few studies I have seen tell me that this may not be the case, and advertisers will do themselves great harm if they start believing that device usage equals commercial viewing for all platforms.
I see what you mean now Ed.
I did a study a decade back and found that channel switching was the biggest "commercial break" avoidance. The thing was that in avoiding the commercial in the programme they were watching, they often ended up 'watching' commercials in other programmes (probably even less targeted programmes).
With latency delays it was extremely hard to determine exact content viewing and duration, so I used a 'dominant channel' and therefore its ad content. So while switching was large I was seeing an overall ad-break gross audience (OTS only) decline averaging around 5% but with mid-break closer to 10%. Given the increased competition and number of channels here in the past decade I am sure those numbers are now very conservative.
What I strongly agree with is that "OTS" levels across devices cannot be similarly compared. That is, an online ad served (generally always at less than 100% of the screen) cannot be stacked up against a TV ad at 100% of the screen. I also suspect that the larger the screen size, the higher the likelihood that an ad is seen (as noted in all the OOH visibility studies, and the magazine and newspaper studies). I could see a system where device-level LTS factors could be overlaid on OTS data to correct for some of these skews.
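The overlay described here is, at bottom, simple arithmetic: weight each device's opportunity-to-see count by a device-specific likelihood-to-see factor. A minimal sketch follows; every number in it (the impression counts and the LTS weights) is a made-up illustration, not measured data.

```python
# Illustrative sketch: overlaying hypothetical device-level LTS
# (likelihood-to-see) factors on raw OTS (opportunity-to-see) counts.
# All values below are invented for demonstration purposes only.

ots_by_device = {        # raw ad impressions served, per device
    "tv": 1_000_000,
    "desktop": 800_000,
    "mobile": 1_200_000,
}

lts_factor = {           # hypothetical visibility weights by screen size
    "tv": 1.00,          # full-screen ad on the largest display
    "desktop": 0.70,
    "mobile": 0.50,
}

# Apply each device's LTS factor to its OTS count.
adjusted_ots = {d: round(ots_by_device[d] * lts_factor[d]) for d in ots_by_device}
print(adjusted_ots)
# {'tv': 1000000, 'desktop': 560000, 'mobile': 600000}
```

Under these invented weights, mobile's apparent audience advantage over TV disappears once screen size is accounted for, which is the kind of skew-correction the overlay is meant to surface.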
John, I agree that in overt terms channel switching is a clear indicator of commercial avoidance; however, it represents a small percentage of the presumed audience for an average commercial: probably only 3-5% are lost in this manner, while 1-2% tune in from other channels. A far greater loss, but one that is not recorded, is leaving the room. The peoplemeters can't be expected to track this, but I have seen other research (observational studies, data from heat sensors, etc., as well as less reliable claims made by viewers) suggesting that, on average, a loss of 5-10% is involved per message. By far the greatest loss, however, is when people who stay tuned and remain in the room simply don't pay attention. This is the hardest to measure, as even the observational studies (mainly eyes-on-screen) can be misleading. A viewer can be paying some degree of attention while listening or while looking elsewhere from time to time, even if texting or receiving a call on a cell phone. As a guess, based on all of the studies, I would place this form of audience loss per commercial at 25-35%, this being only an average, accepting wide variations around the norm based on how new or interesting the ad happens to be.