VideoAmp Launches Second-By-Second Audience Measurement

Advertising measurement platform VideoAmp says it now offers a second-by-second measurement tool for TV-video marketers.

The measurement tool combines smart TV ACR (automated content recognition) data and set-top-box viewership data across more than 39 million TV homes.

VideoAmp says this will address the inconsistencies and limitations of relying on a single data source -- as well as of traditional average-minute commercial viewer measurement, which gives all commercials within a program the same rating. Second-by-second measurement gives results for each advertisement within a program or event.
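To make the distinction concrete, here is a minimal sketch in Python -- toy data and invented names, not VideoAmp's actual methodology or API. An average-commercial-minute approach assigns every spot the same pooled average, while a second-by-second approach averages the audience over each spot's own airing window:

```python
# Illustrative only: toy data, not VideoAmp's methodology or API.
# audience[t] = number of homes tuned in at second t of a program.
audience = {t: 1000 - t // 2 for t in range(600)}  # a slowly declining audience

# Each ad is (name, start_second, end_second) within the program.
ads = [("spot_a", 60, 90), ("spot_b", 300, 330), ("spot_c", 540, 570)]

# Average-commercial-minute style: one pooled number for every spot.
all_ad_seconds = [audience[t] for _, s, e in ads for t in range(s, e)]
avg_commercial_minute = sum(all_ad_seconds) / len(all_ad_seconds)

# Second-by-second style: each spot averaged over its own airing window.
per_spot = {
    name: sum(audience[t] for t in range(s, e)) / (e - s)
    for name, s, e in ads
}

print(f"pooled average across all spots: {avg_commercial_minute:.0f}")
for name, value in per_spot.items():
    print(f"{name}: {value:.0f}")  # later spots show the audience decline
```

With a declining audience, the per-spot numbers separate: spots later in the program rate lower than the single pooled average would suggest.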

In a release, Tony Fagan, chief technology officer of VideoAmp, says “average commercial minute is a compromise the industry has had to make due to a lack of fidelity in panel-based measurement.”

VideoAmp's second-by-second measurement platform offers insights including commercial index, impressions, frequency, average commercial audience, average program audience, advertiser reach, incremental cumulative reach and total viewers.
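As a rough illustration of how a few of those metrics relate (a hypothetical log format, not VideoAmp's schema): impressions count exposures, reach counts distinct homes, and frequency is impressions divided by reach:

```python
# Illustrative only: a toy impression log, not VideoAmp's data model.
# Each record is (home_id, ad_id).
log = [
    ("home1", "spot_a"), ("home1", "spot_a"), ("home2", "spot_a"),
    ("home2", "spot_b"), ("home3", "spot_b"),
]

impressions = len(log)                   # total exposures
reach = len({home for home, _ in log})   # distinct homes exposed
frequency = impressions / reach          # average exposures per reached home

print(impressions, reach, round(frequency, 2))  # 5 3 1.67
```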

Throughout this year, VideoAmp has partnered with major media publishers -- as well as six large media agency holding companies -- on new currency measurement trials. Publisher partners include Paramount Global, Warner Bros. Discovery and TelevisaUnivision, among others.

This comes as a number of new measurement providers vie for prominence, seeking to have their data used as the basis -- or "currency" -- for buying and selling TV/media advertising, especially among the big TV-based media companies.

3 comments about "VideoAmp Launches Second-By-Second Audience Measurement".
  1. Ed Papazian from Media Dynamics Inc, October 5, 2022 at 3:22 p.m.

    All you can get with STBs and smart TV sets is a second-by-second determination that content appeared on a screen---that's all. It's not a measure of viewing, nor even of who might be "watching". TVision and others tell us that, on average, about 30-35% of the time---seconds, that is---when a TV or CTV commercial is presented, no one is even there. Worse, because younger, affluent households average 3.0-3.5 residents while older homes have 1.6-1.7 residents, the former use their sets far more often. But the individual members of such households are watching only 35-40% of the time. In contrast, as there are many fewer persons in residence---indeed, often only one---when an older home tunes in, chances are much better that the older resident is actually the one who is watching. Hence set usage data will suggest that younger and/or affluent adults may be the dominant viewer group for many shows when, in fact, older audiences are far more common.
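A toy calculation of the skew Ed describes -- the resident counts and the younger-household presence rate are his figures; the older-home presence rate is an assumed stand-in:

```python
# Illustrative only: resident counts and the younger presence rate come from
# the comment above; the older-home presence rate (0.85) is assumed.
households = {
    # label: (avg residents, per-person probability of being present
    #         during a tuned-in second)
    "younger/affluent": (3.25, 0.375),
    "older":            (1.65, 0.85),
}

for label, (residents, p_present) in households.items():
    # A set-level metric ascribes all residents to a tuned-in second.
    set_level_credit = residents
    # Expected persons actually in front of the screen.
    expected_viewers = residents * p_present
    print(f"{label}: set-level credit={set_level_credit:.2f} persons, "
          f"expected actual viewers={expected_viewers:.2f}")
```

Under these assumptions, set-level data credits the younger home with roughly 3.25 viewers per tuned-in second versus about 1.2 actually present, while the older home's credit of 1.65 is much closer to its roughly 1.4 actual viewers.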

  2. John Grono from GAP Research, October 5, 2022 at 7:17 p.m.

    Spot on, Ed.

    Around 15 years ago, when television was still healthy, I analysed a week of TV viewing in our largest market of Sydney using our OzTAM ratings here in Australia.

    The data was minute-by-minute (but based on second-by-second data capture), so I aggregated all the ad-breaks within a programme and was able to calculate the average drop in audience during the ads. It was a P2+ measure across all broadcasters and all programmes. It also relied on people "logging out" if they left the room/device during the ad break, and it relied on presence rather than attention. The result was a drop of just under 5% in the total audience.

    So what did I learn?


    • That the ratings system does capture some of the audience decline but doesn't measure all of the decline.

    • That the variation across channels, demographics, geography, time-of-day and programme content is extreme, and that the average is a pretty meaningless statistic to apply across the board.

    • That the variation on a second-by-second basis is riddled with accuracy issues (you only need to be out by half a second to attribute the observation to the wrong second).


    So what did I conclude?

    That I wouldn't rely on such granular levels for my media activation and buying. But, perversely, I would use the very blunt "broad average" when doing the strategic planning for the brand.
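For readers who want the mechanics, here is a minimal sketch of the kind of aggregation John describes, assuming minute-level ratings with a flag marking ad-break minutes (the data shape is invented for illustration):

```python
# Illustrative only: toy minute-level ratings for one programme.
# Each record is (minute, audience_thousands, is_ad_break).
minutes = [
    (0, 500, False), (1, 498, False), (2, 470, True), (3, 468, True),
    (4, 495, False), (5, 492, False), (6, 466, True), (7, 464, True),
    (8, 490, False),
]

# Average audience during programme content vs. during ad breaks.
program_avg = sum(a for _, a, ad in minutes if not ad) / \
              sum(1 for _, _, ad in minutes if not ad)
break_avg = sum(a for _, a, ad in minutes if ad) / \
            sum(1 for _, _, ad in minutes if ad)

drop_pct = (program_avg - break_avg) / program_avg * 100
print(f"average audience drop during ad breaks: {drop_pct:.1f}%")
```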

  3. Chuck Shuttles from HyphaMetrics, October 6, 2022 at 10 a.m.

    In an era of having access to, and the ability to analyze, big data, we must take into account that not all data are created equal. There needs to be both transparency and caution in understanding what inferences are being made by deterministic (actual behavior collected at the persons level) versus probabilistic (assumed/modeled data of persons ascribed at the household/STB/smart-TV level) data. Using large-scale datasets with probabilistic assumptions at the persons level is fine, as long as some validation/calibration against actual person-level behavior is ALSO part of the overall model. As the maxim cited at the end of each 1980s G.I. Joe cartoon goes: "And knowing is half the battle!"
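A rough sketch of the calibration idea Chuck raises -- the data and the naive rescaling method are entirely hypothetical -- in which modeled, household-ascribed viewing probabilities are rescaled so their average matches a deterministic person-level panel observation:

```python
# Illustrative only: a naive calibration of modeled person-level viewing
# probabilities against a deterministic panel measurement.
modeled_p = [0.60, 0.55, 0.70, 0.65]  # probabilistic: persons ascribed
                                      # from household/STB/smart-TV data
panel_rate = 0.40                     # deterministic: observed share of
                                      # panelists actually viewing

# Scale the modeled probabilities so their mean matches the panel truth.
scale = panel_rate / (sum(modeled_p) / len(modeled_p))
calibrated = [min(1.0, p * scale) for p in modeled_p]

print([round(p, 2) for p in calibrated])  # mean is now ~0.40
```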
