The measurement tool combines smart TV ACR (automated content recognition) data and set-top-box viewership data from more than 39 million TV homes.
VideoAmp says this will address the inconsistencies and limitations of relying on a single data source -- as well as of traditional average minute commercial viewer measurement, which gives all commercials within a program the same rating. Second-by-second measurement produces results for each individual advertisement within a program or event.
In a release, Tony Fagan, chief technology officer of VideoAmp, says “average commercial minute is a compromise the industry has had to make due to a lack of fidelity in panel-based measurement.”
VideoAmp’s second-by-second measurement platform offers insights including commercial index; impressions; frequency; average commercial audience; average program audience; advertiser reach; incremental cumulative reach; and total viewers.
Throughout this year, VideoAmp has partnered with major media publishers -- as well as six large media agency holding companies -- for new currency measurement trials. These include Paramount Global, Warner Bros. Discovery and TelevisaUnivision, among others.
This comes as a number of new measurement providers vie for prominence, each seeking to have its data used as the basis, or “currency,” for buying and selling TV/media advertising -- especially among the big TV-based media companies.
All you can get with STBs and smart TV sets is a second-by-second determination that content appeared on a screen -- that's all. It's not a measure of viewing, nor even of who might be "watching." TVision and others tell us that, on average, about 30-35% of the time -- seconds, that is -- when a TV or CTV commercial is presented, no one is even there. Worse, because younger, affluent households average 3.0-3.5 residents while older homes have 1.6-1.7, the former use their sets far more often -- but the individual members of such households are watching only 35-40% of the time. In contrast, because there are far fewer persons in residence -- indeed, often only one -- when an older home tunes in, the chances are much better that the older resident is actually the one watching. Hence set-usage data will suggest that younger and/or affluent adults are the dominant viewer group for many shows when, in fact, older audiences are far more common.
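The skew described here can be illustrated with a back-of-envelope calculation. A minimal sketch: the household sizes and the younger-household attention rate are the figures quoted above, while the tuning hours and the older-household attention rate are hypothetical assumptions for illustration only.

```python
# Illustration of the set-usage vs. actual-viewing skew described above.
# Household sizes (3.0 vs 1.6 residents) and the 37.5% attention rate for
# younger households come from the comment; the tuning hours and the 80%
# attention rate for the older household are hypothetical assumptions.

def expected_person_views(residents, attention_rate, tuning_hours):
    """Expected person-hours of actual viewing implied by set-usage data."""
    return residents * attention_rate * tuning_hours

# Younger/affluent home: more residents, heavier set usage, lower attention.
young = expected_person_views(residents=3.0, attention_rate=0.375, tuning_hours=40)
# Older home: fewer residents, lighter set usage, higher attention (assumed).
older = expected_person_views(residents=1.6, attention_rate=0.80, tuning_hours=25)

print(f"Younger home, person-hours actually viewed: {young:.1f}")  # 45.0
print(f"Older home, person-hours actually viewed: {older:.1f}")    # 32.0
# Set usage alone (40 vs 25 tuning hours) shows a 1.6x lead for the younger
# home, but the implied actual-viewing lead is only ~1.4x -- set-level data
# overstates the younger audience.
```

The point of the sketch is only directional: multiplying set usage by residents without an attention adjustment systematically inflates the apparent audience of high-occupancy, heavy-usage homes.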
Spot on, Ed.
Around 15 years ago, when television was still healthy, I analysed a week of TV viewing in our largest market, Sydney, using our OzTAM ratings here in Australia.
The data was minute-by-minute (but based on second-by-second capture), so I aggregated all the ad breaks within a programme and was able to calculate the average drop in audience during the ads. It was a P2+ measure across all broadcasters and all programmes. It also relied on people 'logging out' if they left the room/device during the ad break, and on presence rather than attention. The result was a drop of just under 5% in the total audience.
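The aggregation described here is straightforward to sketch. Assuming minute-by-minute audience figures tagged as programme or ad-break minutes (the data values below are hypothetical, chosen to land near the ~5% drop quoted):

```python
# Sketch of the ad-break drop-off calculation described above: compare the
# average audience across ad-break minutes with the average across
# programme minutes. The audience figures are hypothetical.

def ad_break_drop(minutes):
    """Percentage audience drop during ad breaks vs programme minutes."""
    prog = [m["audience"] for m in minutes if m["segment"] == "programme"]
    ads = [m["audience"] for m in minutes if m["segment"] == "ad_break"]
    avg_prog = sum(prog) / len(prog)
    avg_ads = sum(ads) / len(ads)
    return (avg_prog - avg_ads) / avg_prog * 100

minutes = [
    {"segment": "programme", "audience": 1000},
    {"segment": "programme", "audience": 1020},
    {"segment": "ad_break", "audience": 960},
    {"segment": "ad_break", "audience": 958},
    {"segment": "programme", "audience": 1010},
]
print(f"Audience drop during ads: {ad_break_drop(minutes):.1f}%")  # 5.0%
```

In a real workflow the same comparison would run across every break in every programme for the week, then be averaged, but the per-break arithmetic is exactly this.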
So what did I conclude?
That I wouldn't rely on data at such a granular level for my media activation and buying. But, perversely, I would use the very blunt 'broad average' when doing strategic planning for the brand.
In an era of access to, and the ability to analyze, big data, we must take into account that not all data are created equal. There needs to be both transparency and caution in understanding what inferences are being made by deterministic data (actual behavior collected at the persons level) versus probabilistic data (assumed/modeled persons ascribed at the household / STB / smart TV level). Using large-scale datasets with probabilistic assumptions at the persons level is fine, as long as some validation/calibration against actual person-level behavior is ALSO part of the overall model. As the maxim cited at the end of each 1980s G.I. Joe cartoon goes: "And knowing is half the battle!"
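One simple form the validation/calibration step can take is scaling the big-data estimate by the ratio of actual to modeled viewers observed in an overlapping person-level panel. A minimal sketch with hypothetical numbers (the function name and all figures are illustrative, not any provider's actual method):

```python
# Hypothetical sketch: calibrate probabilistic, household-level impression
# estimates against a deterministic person-level panel. All numbers and
# the function itself are illustrative assumptions, not a vendor's method.

def calibrated_impressions(modeled_impressions, panel_actual_viewers,
                           panel_modeled_viewers):
    """Scale modeled impressions by the panel-observed actual/modeled ratio."""
    calibration_factor = panel_actual_viewers / panel_modeled_viewers
    return modeled_impressions * calibration_factor

# Big-data model ascribes 2.0M person-level impressions; in the panel
# overlap, only 850 actual viewers were found where the model assumed 1,000.
adjusted = calibrated_impressions(2_000_000, 850, 1_000)
print(f"Calibrated impressions: {adjusted:,.0f}")  # 1,700,000
```

Real calibration models are far richer (demo-by-demo, daypart-by-daypart), but the principle is the same: the deterministic panel supplies the ground truth that keeps the probabilistic scale honest.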