Podcasting, one of the fastest-growing segments of the media marketplace, faces a big obstacle to the validity of its audience measurement: it cannot meet basic ad industry standards.
That was among the significant disclosures made by the advertising and media industry’s measurement watchdog, the Media Rating Council (MRC), during a first-of-its-kind public briefing updating the industry on a variety of standards and processes. Although not part of the formal presentation, the status of podcast audience measurement standards was raised during the Q&A portion of the webinar, and MRC CEO and Executive Director George Ivie disclosed that there is a fundamental problem with the way the podcast industry currently measures its audiences that prevents it from meeting the MRC’s minimum standards.
“We want to do podcast accreditation,” Ivie told attendees, adding: “The problem is that the podcast industry is measuring prior to when we would ordinarily consider a good measurement position, I'll just call it that. And I'm using that term very colloquially.”
Specifically, he said the MRC has moved to a standard of measuring audience exposure on the “client-side,” meaning when a consumer is actively rendering the content on their device, not when the content was streamed.
It’s similar to the standard the MRC has set for other forms of served or downloaded media -- including mobile apps, which in many cases are how consumers actually listen to podcasts.
“We want to measure on the client-side when the [ad] has had an opportunity to be seen by the user,” Ivie noted, presumably meaning “heard” in the case of an audio ad embedded in a podcast.
“So, in a podcast context, that would mean, the podcast was downloaded, it was initiated, being played, and the ad was also initiated. That's when you should measure,” he explained, adding: “Right now, most podcast measures are measuring pre-that. They're measuring either when the podcast itself was downloaded, but nobody's actually started listening to it. Or maybe when somebody started listening to it, but there's no confirmation that anyone's heard any particular ad.”
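Ivie's three conditions amount to a simple conjunctive rule. As a minimal sketch (the function and event names here are invented for illustration, and are not any actual measurement vendor's API), the client-side standard could be expressed as:

```python
# Hypothetical sketch of the client-side ad-measurement rule Ivie describes:
# count an ad impression only after the episode was downloaded, playback has
# started, AND the ad itself has begun rendering on the device.

def countable_ad_impression(downloaded: bool,
                            playback_started: bool,
                            ad_started: bool) -> bool:
    """Return True only when all three client-side conditions hold."""
    return downloaded and playback_started and ad_started

# A download alone -- the common server-side measure -- does not qualify:
assert countable_ad_impression(True, False, False) is False
# Playback without confirmation the ad rendered also does not qualify:
assert countable_ad_impression(True, True, False) is False
# Only the full client-side sequence counts:
assert countable_ad_impression(True, True, True) is True
```

The point of the conjunction is the "gap" Ivie describes: most current podcast metrics report the first condition (the download) as if it were all three.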
Ivie emphasized that the MRC wants to audit and accredit podcast audience measurement, but he described the impediment as a “gap” between how the podcast industry has enabled measurement to date, and the MRC’s standard for measuring ad exposure in a downloadable medium.
While the MRC's reservations in this case make sense, once again we seem to be accepting device usage as equal to "audience". In other words, if a consumer downloads and starts to play a podcast, that means that he/she is always present and "listening" to every bit of content---including ad messages---until the podcast concludes---or its presence is terminated. Is that what advertisers really want from audience surveys? Of course the same questions apply to "TV" in its many forms.
@Ed Papazian: I think you have it backwards. The MRC's standard doesn't benchmark device usage or assume that anyone is listening to anything, but it is a higher order of measurement than benchmarking that an ad was exposed prior to a user downloading it. The reason why is that in downloadable media, there is no guarantee that an ad will render until it renders.
In a bigger sense, this is about an opportunity to hear a podcast ad, not that anyone heard it.
Joe, I think that I get it. My point is that in virtually all of these discussions about audience metrics, we keep focusing on whether the device presents the ad message---and that seems good enough to qualify as "opportunity to be exposed"---when it isn't. I don't know what the percentage is for podcasts, but in the case of "TV" about 25-30% of the devices have nobody present when the ads appear on their screens---so how is that an opportunity to be exposed?
I know that I'm fighting a losing battle, as it's much harder to dig in and build attentiveness into the equation---so we simply must move things along as quickly as possible---and that will be the standardization of device usage as if it represents "audience". Of course they will say, we'll attend to attentiveness later---but I won't be holding my breath waiting for that to happen---not with the time/space sellers doing most of the funding---and controlling the final outcome.
@Ed Papazian: The first order -- which is the MRC's minimum standard -- is that there's even an opportunity to see, or in this case, hear an ad. The MRC has been pushing for higher orders too, including explicit outcomes going all the way to sales lift, brand lift, and even return on ad spending (see related story also published today: https://www.mediapost.com/publications/article/368185/mrc-nears-completion-of-outcome-standards-asses.html).
It's also rethinking standards for conventional audience measurement, including TV in digital ad insertion environments where average quarter-hour ratings are moot, because the ad may not even be presented to all of that audience.
If the ad industry ever moves to a higher order of ad exposure -- like someone actually seeing and/or paying attention -- the MRC likely would weigh in with standards for that, too. But in the meantime, it's going straight to outcomes, which some might consider to be the end goal, anyway.
Joe, I appreciate the urge to get something done quickly that seems to treat all electronic media equally, namely using device usage as a surrogate for opportunity to see---OTS. I also understand why the media time and space sellers will support device usage as it gives them bigger numbers---even if they are very inflated and misleading numbers. But it seems to me that the only way you are going to correlate "outcomes" with ad exposures is if you measure real ad exposures---not might-be---or maybe-not---ad exposures. By skipping this vital step in our haste---plus to please the time and space sellers---we will ultimately defeat the oft-stated purpose of correlating outcomes with ad exposures, as the OTS data we will use will have built-in error margins so large that the statistical trails will be hopelessly muddled.
@Ed Papazian: What the MRC has been doing has neither been "done quickly" (it's taking years) nor does it inflate what media sell. If anything, it has deflated gross media impressions, because it at least requires that ad impressions are available to be seen. The MRC is working with the demand-side of the industry to set these standards (ANA/4As, 3MS).
If the demand-side ever steps up and requires actual exposure and/or attention, the MRC will set standards for that too.
Re. your last point about correlating outcomes with ad exposures, plenty of media, measurement firms, advertisers and agencies already do that (vis-a-vis attribution models, etc.). What's happening now is that the MRC is setting minimum standards for doing it right in order to be audited and accredited.
Joe, my comments were not meant as an attack on the MRC but, rather, on "the industry's" failure to appreciate the need for using ad-relevant numbers in its efforts to figure out what the impact---or "outcome"---is for all sorts of media buys across all sorts of venues. There have been lots of "attribution" or "ROI" models and I assume many more will be developed in the future---but most have failed to deliver the definitive answers that many expected---only statistical hints. The theoretical foundations of most of these models make sense---but they floundered because of the weakness---or absence---of the data they utilized. One reason is that much of the media information fed into these models---ad spending, GRPs, "impressions," etc.---did not measure up to the task of correlating advertising contacts with advertising outcomes. I wish the MRC and the various committees involved the best of luck in their quest---but I doubt that using OTS will lead us to the holy grail. Just my ever humble opinion, of course.
Joe: Sorry but Ed is precisely on point and respectfully I think your position is weak.
This is of course complex stuff. The industry and MRC continue to use two nebulous terms which today reflect anything you 'bloody-well' want them to mean: "Audience" and "Impressions". These terms should ONLY be persons-based and never device-based, period. It is in fact the use of device-based media measurement that is a key reason why attribution models generally do not work. As the Attention Council has posited, there can be no outcomes without attention, or at least Eyes or Ears On. In other words, there must be target audience contact to produce a brand impact, depending on the power and relevance of the creative message and the media environment or context.
Proof that the creative message is rendered fully to specifications on a device, panel or a printed page is fundamentally important, but it does not produce an impression or OTS, which as stated is a persons-based measure. (Apparently in the digital video world we now have a different definition of impressions or OTS!! Is that an alternative truth??) Accessible Content Rendered Counts are also typically a very poor surrogate for ad attention measures, per Ed. Look at the Lumen/TVision data. This is why Dentsu and other media agencies are embracing this metric so thoroughly to more fully understand the media & creative contributions to Outcomes. So the demand side has stepped up?
@Tony Jarvis: "Respectfully?" "Weak?"
Well, it's ironic that you're citing the "Attention Council" (aka Adelaide) as a standard for measuring persons and not devices, given that their methodology is based on measuring "hundreds" of signals rendered on the devices of persons.
No one is arguing that human attention paid isn't something that should be "embraced," but with the exception of observational studies, neuro and biometric measurement, how do you do that? And how do you scale it?
Everything else is measuring proxies for people's attention (usually from device signals), not empirical attention paid.
Please explain what this "metric" is, how it works and why it is an empirical measure of someone's attention?
I'm not aware of any industry standard, or even a consensus for that.
Joe, based on the TVision experience---it seems possible to develop a national panel---perhaps in conjunction with Nielsen's assembly of panels---or separately---that provides eyes-on-screen measurements for most nationally available TV shows---including those streaming and CTV venues---subject to sample size considerations. The probable national panel size for such an operation would be at least three to four times what Nielsen has now, as attentiveness is a subset of program "audience". So, naturally, such a service would be more expensive than the current 40,000-home panel used by Nielsen. That's why it's so important for advertisers to get involved not only with speeches asking for "better research" but with a fair share of the funding.
Such a service would be extremely valuable not just for time buying purposes but for programming decisions and, especially, for evaluating the performance of ad campaigns. But there would be issues---like measuring out-of-home activities for TV and digital device screens. These would have to be looked into but I believe that about 85-90% of viewing could be captured by an attentiveness based service---which is far better than doing nothing.
As for the other types of research you mentioned---biometrics, for instance---these would not apply as they go more to ad impact, and, being honest, that's asking for the Moon. We aren't going to get such metrics built into a new TV rating service, though they can be used in a more selective manner.
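For what it's worth, the panel-size arithmetic Ed cites above is straightforward: scaling Nielsen's roughly 40,000-home panel by the three-to-four-times factor he suggests implies a panel on the order of 120,000 to 160,000 homes. A quick check, using only the figures from the comment above:

```python
# Rough arithmetic implied by Ed's estimate: attentiveness is a subset of
# program "audience," so an eyes-on panel would need roughly 3-4x Nielsen's
# current ~40,000 homes to preserve usable sample sizes per show.
current_homes = 40_000
low, high = 3 * current_homes, 4 * current_homes
assert (low, high) == (120_000, 160_000)
```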
@Ed Papazian: So a panel of respondents agreeing to have their attention measured via computer vision?
If that's what the industry wants.
Wonder why TVision isn't MRC accredited, or why that is at odds with the standards the MRC has been setting for other media exposure and outcomes.
I don't mean to pre-empt Tony's reply, Joe, but for TV set viewing---including streaming as well as desktop PCs---here's a hypothetical process that would be used to measure attentiveness. The new service would first determine whether a screen was activated and what content was on the screen. Then the "camera" working that device would note if any member of the panel home was present and whether they were watching the program. It would also note the appearance of a "new" potential viewer from time to time, whose status as a program "viewer" would be verified by the "camera". Along comes a commercial break and the system knows exactly who in the home---plus any visitors---had been watching the program just before the commercial break. Then each of these people would be monitored to see if they were present when the ad appeared, whether they looked at the screen and for what duration---or sequence of durations. And that's about it---you get a commercial-by-commercial audience projection as well as lots of other stuff to analyze.
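The hypothetical process Ed describes could be sketched as a simple roll-up of per-second eyes-on-screen signals into per-commercial attention durations. Everything here (the function, the signal format, the sample data) is invented for illustration, not how TVision or any other vendor actually computes it:

```python
# Hypothetical sketch of the camera-based attentiveness roll-up Ed outlines:
# for one panelist, a per-second list of "eyes on screen" observations during
# a commercial break is summed into seconds of attention per commercial.

def attention_seconds(eyes_on: list[bool], ad_start: int, ad_end: int) -> int:
    """Seconds within [ad_start, ad_end) the panelist looked at the screen."""
    return sum(1 for t in range(ad_start, ad_end) if eyes_on[t])

# One panelist over a 30-second break containing two 15-second spots:
# watches for 10 seconds, looks away for 5, then watches the rest.
signals = [True] * 10 + [False] * 5 + [True] * 15

assert attention_seconds(signals, 0, 15) == 10   # looked away mid-spot
assert attention_seconds(signals, 15, 30) == 15  # watched the second spot fully
```

Aggregating such per-person, per-commercial durations across the panel is what would yield the commercial-by-commercial audience projections Ed mentions.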
Joe, to my knowledge TVision is not accredited by the MRC as it is not being used as a TV rating currency source. As for people being willing to be "watched" by electronic cameras, that's easy. I'm sure they are all promised that the records will be totally confidential---just as applies to Nielsen's current rating panels as well. With panels you are never going to get super high cooperation---or participation---levels. That's a trade-off to getting lots of information cost efficiently while doing one's level best to keep the panel as representative as possible---which is what the justified fuss regarding Nielsen's failure to keep in touch with its panel last year was all about.
Of course, any new service using electronic cameras to observe audience behavior would need to be thoroughly vetted---for example, how is the data tabulated or weighted, are all screens in the home monitored, how much of the panel is turned over each month---or year---how are replacements selected, etc.---just like Nielsen is vetted now.