The third and final day of the ARF’s annual Audience x Science Conference Wednesday focused on the underpinnings of “cross-media measurement,” consumer privacy (or is that piracy?), consumer identity, and “approaches & methodology.”
To achieve accurate, meaningful, holistic media measurement -- even across video platforms alone -- that can earn Media Rating Council (MRC) accreditation, these areas must be resolved. As Procter & Gamble’s Marc Pritchard wisely stated at Tuesday’s event, any solutions must be driven by a “consumer’s complete experience.”
Moderator Alice Sylvester, a partner at Sequent Partners, and the most recent recipient of the prestigious Erwin Ephron Demystification Award, asked her expert panel on identity resolution solutions, “How close are we? We are not going to get there, right?”
Her panel -- including LiveRamp’s Travis Clinger, Blockgraph’s Jason Manningham, and TransUnion’s Matt Spiegel -- all disagreed. They felt that robust, authenticated, interoperable audience (user and/or household) identifiers -- a cornerstone of cross-platform video measurement -- are not so much a technical problem.
They did admit that while their ongoing development is central to targeting, identity matching and measurement, privacy and consumer trust will remain an issue. In building a multi-integrated data set, understanding the level of compliance and trust in any “matched ID resolution” within that set will be fundamental.
Their optimism for interoperable IDs across various data environments was not shared by Analytic Partners’ Mike Menkes or Real Chemistry’s Seth Duncan.
They based their positions primarily on the typical 30% opt-in levels in the digital media ecosystem and the virtual non-availability of walled-garden data from the tech giants and others. Overcoming the consequent non-representativeness -- the exclusion of the non-opt-in population and of walled-garden media users -- is a daunting prospect. The data, they suggested, “will always be imperfect.”
Menkes also slayed a Facebook dragon from a presentation earlier in the day by the social network’s Vice President-Advertising Ecosystem Dennis Buchheim. Buchheim had suggested that based on Facebook’s research, “last click” offered an opportunity as a meaningful proxy metric for a go/no-go decision based on cost-per-incremental-conversion for a brand.
Menkes categorically stated that “last click is not good enough” and can overstate marketing effects by up to 10 times -- also noting the differences between addressable and non-addressable ads.
Buchheim, at Facebook for only 90 days after arriving from the IAB Tech Lab, made an eloquent plea for the development of foundational technology standards that enable growth and trust in the digital ecosystem. He also expressed his desire for Facebook to work in collaboration with the rest of the industry as an ally.
Based on his extensive tech/data experience and his expressed personal philosophy of keeping the internet free and embracing end-user value, safety, and integrity, I do not doubt he was sincere. However, the entire presentation came across as a feel-good, Pollyanna effort by Facebook, the premier member of the walled gardeners’ club.
Forgive me, but anyone who has watched “The Social Dilemma” or has read Charlie Warzel’s articles in The New York Times on Facebook cannot help but be skeptical.
As consumer research panels are clearly not going away, at least as part of the cross-media measurement solution, Kantar’s Jon Puleston shared extensive work on significantly improving the quality of even the most basic survey responses on demography and media consumption via improved questioning techniques.
This seminal work will hopefully be the driving force for a special ESOMAR (European Society for Opinion and Marketing Research) committee of its Standards Council to establish best practices to protect the value of panel data.
“Calibration panels” are considered a critical element of any holistic cross-media integrated measurement approach and are endorsed by the ARF’s Coalition for Innovative Media Measurement (CIMM). Their importance to the methodological and data mix was agreed on by both Nielsen’s Molly Poppie and Comscore’s Michael Vinson -- wow! 605’s Caroline Horner also agreed.
Paramount among the many benefits such calibration panels would bring is the ability to de-duplicate audiences and consequently estimate campaign reach and frequency, which remain fundamental brand planning and buying metrics.
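To make the de-duplication point concrete, here is a minimal sketch of how a calibration panel’s exposure records could be collapsed into de-duplicated reach and average frequency. The panelist IDs, platform names, and exposure counts are invented for illustration; this is not any vendor’s actual methodology.

```python
# Toy sketch: de-duplicating cross-platform ad exposures from a
# calibration panel. All data below is hypothetical.
from collections import Counter

PANEL_SIZE = 10  # total panelists in the toy panel

# exposures per platform: panelist_id -> number of ad exposures
exposures = {
    "tv":      {1: 2, 2: 1, 3: 3, 4: 1},
    "digital": {2: 2, 5: 1, 6: 4},
    "radio":   {1: 1, 7: 2},
}

# naive (duplicated) reach: simply summing each platform's reach
# double-counts panelists seen on more than one platform
naive = sum(len(p) for p in exposures.values()) / PANEL_SIZE

# de-duplicated reach: count each panelist once across all platforms
totals = Counter()
for platform in exposures.values():
    totals.update(platform)
reach = len(totals) / PANEL_SIZE
avg_frequency = sum(totals.values()) / len(totals)

print(f"naive reach: {naive:.0%}")          # 90%
print(f"de-duplicated reach: {reach:.0%}")  # 70%
print(f"avg frequency: {avg_frequency:.1f}")
```

The gap between the naive 90% and the de-duplicated 70% is exactly the overlap a calibration panel is meant to reveal.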
So, as Sequent Partners’ Alice (Sylvester) went into wonderland, things got “curiouser and curiouser.”
Her question regarding identity resolution solutions -- “We are not going to get there?” -- was, I suspect, rhetorical. What the actual answer does for progress on accurate, objective cross-media measurement is extremely worrying.
Tony, one of the big issues seems to be getting "accurate" cross-media reach figures. But think about it. If you are able to estimate -- or tabulate -- the individual reach levels of your TV schedule, your radio schedule and a digital video schedule as being 60%, 55% and 35%, respectively, getting a fix on their combined reach -- not to be confused with the ads being seen or heard -- is pretty simple. You don't need a trillion-dollar national panel of 10 million homes to come up with a workable answer. It can't be less than 60%, as that's the highest figure for any of the three components, and it can't be higher than 100% -- so you are estimating how much of the missing 40% radio or digital video adds to the TV reach. The long-established random duplication formula tells you that TV plus radio gets you an 82% reach, while adding digital video brings it up to 88%.

Now how far off would those estimates be compared to the actual findings of a trillion-dollar panel study? Probably not more than 0.3 of a point. Ah, you may say, but what about much smaller buys -- like TV getting only a 2.5% reach, radio only 1.7% and digital video 0.2%? The random probability estimate for such a media mix is 3.9%. Horrors! What if the "real" figure is 4.0% -- or even worse, what if the "real" figure is 4.1% -- based on our trillion-dollar panel's findings? Woe is us? Have we doomed our ad campaign to failure? Or does it make the slightest difference?

Think about it -- why all of the fuss about cross-media audience duplication? Is that really the big problem? If we spend huge sums to get a panel's answers -- which may or may not be accurate anyway -- are we accomplishing anything?
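The random duplication arithmetic the comment above relies on can be sketched in a few lines, assuming exposure on each medium is statistically independent, so combined reach is one minus the product of each medium's non-reach. The reach inputs come from the comment; the function name is mine.

```python
# Random duplication estimate: treat exposure on each medium as
# independent, so combined reach = 1 - product of (1 - reach_i).
def random_duplication_reach(reaches):
    """Combined reach under independence; reaches given as fractions."""
    non_reach = 1.0
    for r in reaches:
        non_reach *= (1.0 - r)
    return 1.0 - non_reach

# Reach levels from the comment: TV 60%, radio 55%, digital video 35%
tv_radio = random_duplication_reach([0.60, 0.55])         # 0.82
all_three = random_duplication_reach([0.60, 0.55, 0.35])  # 0.883
print(f"TV + radio: {tv_radio:.0%}")                # 82%
print(f"TV + radio + digital: {all_three:.0%}")     # 88%
```

This reproduces the 82% and 88% figures cited in the comment from the 60/55/35 inputs.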
Thanks Tony and Ed.
Calibration panels to generate cross-media reach are a suitable and robust-enough method for media planning.
However, there are two 'misses' in that method. The first is that we don't have any robust post-campaign reach-delivery measurement. Take, for example, children's media usage (I use that example as many privacy laws forbid tracking under-14s). We have pretty good transparency for TV and magazines, but not much more. Applying an 'average' reach model where there are millions of options, such as online, is likely to be very misleading. The wide behavioural variations between male and female, a 6-year-old vs. a 10-year-old, etc. would mean that reach regresses to the mean, when the strength of online advertising is providing laser-focused targeting.
The second is that a sufficiently usable and meaningful calibration panel would mean millions upon millions of panelists, and given the immediacy of electronic media, reporting would ideally need to be daily, or at least weekly. The number of permutations would be massive, and the results would probably arrive the day after the day after the day of the ad. Bring on quantum computing. A 'deliverable' calibration panel would probably not provide the precision that the advertiser, agency and media owner would be looking for. So who would back such a blunt instrument?