If this column was actually written – as the name “RTBlog” suggests – in real time, it would have begun February 17, when speakers at the Association of National Advertisers’ media conference credited the Media Rating Council (MRC) with developing the “virtual ID,” or VID, that is the cornerstone of a new cross-media measurement system that will, for the first time, enable marketers to control the reach and frequency of their ads across all video advertising platforms and screens.
Alas, that’s not actually how journalism works. Sometimes the record changes over time as new information comes to light from different sources.
In other words, today’s column is being published to set the record straight and explain exactly what the MRC’s role is, pre- and post-development of the ANA’s cross-media measurement system.
While it’s true that the MRC originally conceived of the need for a VID when it created the U.S. ad industry’s “Cross-Media Audience Measurement Standards” in 2019, it didn’t actually create it. I know this because MRC CEO and Executive Director George Ivie reached out to set that record straight, giving credit to some of the industry’s best engineers who worked with the ANA to actually develop it.
The reason Ivie pointed that out wasn’t just to give credit where credit is actually due, but because the MRC cannot be directly involved in building the system, which it will be auditing and validating when it rolls out next year.
What the MRC did do, effectively, is provide a blueprint for anyone who wants to build a cross-media measurement system that aligns with U.S. advertising and media industry standards, and that is what the ANA’s system -- and others, including Nielsen’s One platform -- are being based on. Whether they fulfill that promise and prove up to snuff will ultimately be up to the MRC to determine, so it must remain neutral in the development process, Ivie explained.
Now that that’s clear, here’s a little more of my conversation with him about that.
RTBlog: Why is it important to distinguish who is developing the VID?
Ivie: We picked up a couple of times in some press articles -- most recently in an article you put out -- where we were characterized as the developers of the VID [virtual ID] that’s being used in, for example, the ANA’s cross-media measurement structure.
The VID, by the way, is also being used in structures that are being pulled together in the U.K. by ISBA and a project called Origin, which is doing cross-media measurement.
We didn’t actually develop the VID. A couple of years ago, we put out a cross-media measurement standard. And we’ve also been riding along with the ANA’s cross-media project as an advisor. That’s a very carefully structured involvement, because ultimately we are likely to have to audit the cross-media measurement system, so we can’t build something and then audit it. That’s incompatible.
The reason this is a nuanced concept is that in our cross-media measurement standard -- which admittedly is a big, complex document, because it is a big, complex problem -- we sort of turned measurement a bit on its head.
We reasoned that going forward, to measure cross-media and to deal with fragmentation, we need to really change the orientation of measurement. Measurement, which historically had relied exclusively on things like panels, was becoming more and more challenged.
In this cross-media framework, panels are still necessary, but they’re necessary for a different purpose. The basic structure of measurement and the origination of the data should come from more granular data sources -- things like first-party data from publishers.
And by that, I’m throwing everybody into the “publisher” mix -- big TV companies, internet platforms, content creators -- and getting more granular data from those sources. Or getting more granular data through MVPDs and set-top boxes, and getting tagging data from ads. All kinds of very granular data would be a better, more viable source for dealing with fragmentation in the future.
And something like a panel, which gives you insight into how people behave with different devices, how they allocate their time, etc., would be better used to calibrate that data rather than be the source of first-party measurement.
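To make that orientation concrete, here is a minimal sketch -- an editorial illustration, not the design of any actual system -- of how panel-derived factors could calibrate granular census counts into person-level estimates. Every name and figure below is hypothetical.

```python
# Illustrative only: device-level census counts calibrated into person-level
# estimates using panel-derived factors. All names and figures are invented.

# Granular, person-agnostic counts from publisher logs.
census_impressions = {"tv_network": 9_200_000, "streaming_app": 4_100_000}

# Panel-derived calibration: how device-level counts translate to people
# (e.g., co-viewing inflates TV; shared devices deflate streaming).
panel_factor = {"tv_network": 2.3, "streaming_app": 0.88}

def calibrate(census: dict, factors: dict) -> dict:
    """Scale device-level census counts into person-level estimates."""
    return {k: round(v * factors[k]) for k, v in census.items()}

print(calibrate(census_impressions, panel_factor))
# {'tv_network': 21160000, 'streaming_app': 3608000}
```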
That orientation made it into the standards that we set way back when. And a natural extension of that is that many of these large data sets are not attributed to people. And they certainly have privacy implications in today’s world. Because if I could get Joe Mandese’s television viewing habits in a very granular way, that presents a ton of privacy risks to Joe.
So it needs to be cleaned, anonymized and aggregated in ways that can’t be reverse-engineered to reveal that this is Joe’s data.
So a natural extension of that standard is that you need to develop something like a VID, plus privacy sketches and other mechanisms used in the system, to keep this new large-data-set orientation compliant from a privacy perspective.
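As a rough editorial illustration of the VID idea -- not the actual ANA/WFA implementation, which relies on far more sophisticated cryptographic sketches -- events carrying a common anonymized key can be hashed into a shared synthetic ID space, so reach can be deduplicated across publishers without real identities ever changing hands. All names and the ID-space size here are hypothetical.

```python
import hashlib

# Hypothetical size of the synthetic "virtual people" universe.
VID_SPACE = 10_000_000

def to_vid(anon_key: str) -> int:
    """Deterministically map an anonymized event key to a virtual ID."""
    digest = hashlib.sha256(anon_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % VID_SPACE

# Each publisher reports only the VIDs it reached -- never raw user data.
# (Assumes the same person carries the same anonymized key across publishers.)
publisher_a = {to_vid(k) for k in ("key_001", "key_002", "key_003")}
publisher_b = {to_vid(k) for k in ("key_002", "key_004")}

# Cross-publisher deduplicated reach, computed without real identities.
print(len(publisher_a | publisher_b))  # 4 virtual people reached
```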
So when you say we originated the VID, it’s true that our standards contemplated the need for it, but what we have to be very careful about is that we didn’t build the VID.
The VID was actually built by a consortium and engineers that have been working within the ANA project and the ISBA [Incorporated Society of British Advertisers] project and the WFA [World Federation of Advertisers].
A lot of those engineers came from platforms like Google and Meta, but there are also engineers at, for example, OpenAP, the vendor the TV business is pulling together to help share its original data through a system, anonymize it and control privacy.
These VID models are not built by the MRC. They are built by engineers, and we are going to come back to validate and audit them.
It’s a very nuanced distinction.
RTBlog: Thank you for explaining that. For me, this stuff comes in dribs and drabs before it seeps in, but that particular reporting came straight from the ANA’s media conference. They’re the ones that attributed the VID to the MRC, although we did report that they said the MRC isn’t actually developing it, because it will also be validating it.
When I hear other companies describe the systems they’re developing for cross-media measurement -- Nielsen One, NBCUnified, etc. -- it sounds like the same exact model: using census-level data, panels to calibrate it, and virtual IDs to maintain privacy while attributing the data to individuals in order to control reach and frequency. So a lot of organizations are chasing the same solution.
Ivie: Yeah, a lot of people are chasing the same solution. And it’s the same solution that was contemplated in our standards release.
There are basically only a few ways to arrive at the measurement structures that are needed right now.
It doesn’t surprise me that people are coursing down similar paths and trying to do similar things, because there are only a few ways to do this.
NBCU has a lot of data and it’s got a lot of assets, and it is trying to use them for its business purposes.
RTBlog: When I speak to media companies, platforms, alternative currency providers, and even agencies and advertisers, they say the reason for [sell-side] certification is that they want to move fast and they believe processes like the MRC’s or conventional JICs [joint industry committees] move too slowly.
Ivie: I believe there is merit there. The MRC has to move faster than we currently do, and trying to move better and faster is something the MRC has to do. But at the same time, I’ve testified in front of Congress multiple times -- it’s never fun or easy -- and they always ask questions about how deep our rigor is.
RTBlog: On another subject, algorithms, machine learning and AI are playing an increasingly important role in processing data in audience measurement and targeting. What’s the MRC’s view on that? Will you be developing AI standards for the industry?
Ivie: The key in our view is focusing in on how those assets are trained. There are models used to develop and train AI and there are ways to keep them up to date and make sure they’re complete and that they are fair and unbiased. It’s all about model training and model updating. And that’s where we’re focusing in our validation.
Of course, we also focus on testing the efficacy and making sure it does what we expect it to do, but we focus at least half of our effort on training, updating and internal controls surrounding those processes.
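A hedged sketch of what such a validation gate on model updating might look like -- with invented names and thresholds, not MRC methodology: after each update, holdout performance is compared against the prior version, and drift beyond an agreed tolerance blocks acceptance.

```python
# Illustrative validation gate for a model update; names and thresholds invented.
DRIFT_TOLERANCE = 0.05  # max allowed change in holdout accuracy between versions

def validate_update(old_accuracy: float, new_accuracy: float) -> bool:
    """Accept an updated model only if holdout performance hasn't drifted."""
    drift = abs(new_accuracy - old_accuracy)
    if drift > DRIFT_TOLERANCE:
        print(f"Update rejected: holdout accuracy drifted by {drift:.3f}")
        return False
    print(f"Update accepted: drift of {drift:.3f} is within tolerance")
    return True

validate_update(old_accuracy=0.84, new_accuracy=0.77)  # rejected
validate_update(old_accuracy=0.84, new_accuracy=0.83)  # accepted
```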
RTBlog: In your sojourn, do you ever look at the role of ethics in any of it? Since AI can also interpret, extrapolate and synthesize things even with their model training, could they make ethically adverse decisions that humans wouldn’t necessarily know about?
Ivie: What is difficult about your question is how broad it is. Do I look at ethics as a concept? I’d have to answer no, because ethics is so broad. You can call lots of different kinds of behavior unethical. And we are way more focused than that. We look at representation. We look at fairness. Would you call those ethical issues? Yes, you probably could call those ethical issues. But is it the full spectrum of ethics? No, it’s not. We look at things like coverage or do we have all the data flowing into a model, and especially in the training of the model. And we expect the model will make fair decisions.
For example, let's say we could get access to everybody’s checking account data -- we can’t right now -- but let's say we could compile that data and build a pretty good machine-learning model about how people spend money and what they’re likely to spend money on in the future.
The problem is, we’d ask questions like, “Who in the population doesn’t use checking accounts? What do kids do? What do old people do who live in assisted living facilities?” We ask about coverage, completeness and representation.
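As a sketch of that kind of coverage question -- invented figures, not MRC methodology -- one could compare each population segment’s share of the training data against a census benchmark and flag segments that are materially under-represented:

```python
# Illustrative coverage/representation check; all figures are invented.
census_share = {"18-34": 0.29, "35-54": 0.33, "55+": 0.38}    # population benchmark
training_share = {"18-34": 0.41, "35-54": 0.35, "55+": 0.24}  # model's training data

UNDER_COVERAGE = 0.75  # flag segments covered at under 75% of their census share

for segment, benchmark in census_share.items():
    ratio = training_share[segment] / benchmark
    if ratio < UNDER_COVERAGE:
        print(f"Segment {segment} under-covered: {ratio:.2f}x its census share")
```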
But we don’t cover all ethical issues. Somebody might ask, “What right do you have to collect checking account information? That’s pretty sensitive.” Do we ask those kinds of questions? No.
It’s very complex. I would not stand in front of an audience and say we handle ethics. We handle some very focused ethical issues. But it’s not everything.
Very interesting interview and comments by George.
One thing that everybody -- or almost everybody -- seems to think about the MRC is that when it “validates” a service like Nielsen or Comscore, it is vouching for the accuracy of their findings -- which is impossible, not only because the findings will inevitably vary in detail from one service to the other, but mainly because there is no way to determine what exactly -- down to the last viewer -- is the truly accurate “audience” projection for any TV show. As Dick Weinstein explained it to me long ago, the purpose of the MRC is to see if the research company is actually doing what it says it’s doing -- not to say whether the data is a correct representation of “the truth.”
Since then, the MRC has made great strides in an ever more complex research environment, not only in auditing the research companies but in developing acceptable rules of conduct, mainly on the technical side. But it is not, as far as I know, dictating what the industry should set up as its metrics, nor which methodology is “best” -- or “worst.” In short, we can’t abdicate our own responsibility for making sensible judgments about this to the MRC. It’s still up to “us.”