Video May Be Everywhere, But Industry Debate Finds It's Not Necessarily Equal (Sort Of)

All screens are not necessarily equal, according to a debate of top industry executives capping off the Digital Place-Based Advertising Association’s annual “Video Everywhere” conference in New York Tuesday. The debate, which was moderated point/counterpoint style by Simulmedia CEO and Founder Dave Morgan, positioned advocates on two sides of an opposing question: Should video advertising be treated “agnostically and equally” regardless of the TV, PC, handheld or digital-out-of-home screen it appears on?

Taking the pro side of that position were Posterscope CEO Helma Larkin and Initiative President Kris Magel. Taking the con side were Giant Spoon Co-Founder Alan Cohen and CBS Chief Research Officer David Poltrack.

But what began as a debate ultimately ended in consensus that context -- the consumer, the screen and the content being viewed -- does matter, especially when Initiative’s Magel backed off his opposing view, acknowledging several times that he agreed with Cohen’s and Poltrack’s point of view on that.



Except for one point: Magel said it is imperative for the ad industry to develop a “common metric” for measuring the value of an exposure of a brand’s message across all video screens, including the price that advertisers pay for them.

Poltrack countered, citing years of research including some recent studies, that there are far too many variables to create a common advertising denominator that would account for all screen advertising experiences. He cited the fact that a high percentage of TV viewing is done with other people, and almost always with audio playing a strong role in the “viewing” experience, whereas that might not be the case on a mobile, PC or even an out-of-home screen.

“In terms of pricing, it is a simple fact,” Poltrack said. “We believe that the values of our television space are different from the values of our mobile advertising space.” He added that in the case of CBS’ cross-platform distribution across screens, “we believe they have complementary benefits,” and that’s why the company charges a “premium” for “both of them.”

But that didn’t stop Initiative’s Magel from making his own observation about potential contradictions in Poltrack’s case.

“It’s interesting that the guy from CBS -- the guy I’m buying audience [from] across TV screens, and tablets and mobile devices at the same price -- is giving me some interesting rationale on why I should lower the price [on the mobile/tablet side], because there’s not enough people watching.”

Magel continued that from a brand marketer’s or an agency’s perspective, advertising on any video screen fundamentally comes down to two things: “the return you’re going to get” and the “demand on that particular asset.” And he reiterated the need for a common metric on which to base those decisions.

One thing all the debaters agreed on is that video screens are not equal when it comes to the context consumers have while viewing content on them, or what that content should necessarily be. Later, after the panel discussion, Giant Spoon’s Cohen told MediaPost that one of his biggest clients always tests its TV spots with the “sound off,” because the marketing team wants a sense of how they would play on a mobile device.
9 comments about "Video May Be Everywhere, But Industry Debate Finds It's Not Necessarily Equal (Sort Of)".
  1. Ed Papazian from Media Dynamics Inc, November 4, 2015 at 1:43 p.m.

    I have to side with Dave on this one. It's a well established fact that even within the in-home "linear" TV viewing experience there are huge differences in viewer engagement and attentiveness depending on program content, time of day, whether the viewer is watching alone or with someone else, etc. Obviously, when you try to go "cross screen" with all of the added differences--like sound or lack thereof, location, size of screen, the amount of distractions, etc.--any "audience" based metric is going to provide a very misleading basis for comparison. What's needed is an additional measurement, perhaps involving the ability of the "audience" to recall what was "seen". This might be a start, provided the media ad sellers accept the fact that the inevitable result will be a reduction in their effective audience projections and price their GRPs accordingly. Eventually, the truth will out.

  2. Michael Elling from IVP Capital, LLC, November 4, 2015 at 4:28 p.m.

    I wonder when a law will be passed that all ads on mobile devices must be less than 15 seconds.  Even 15 seconds seems like an eternity when you might just be watching 90 seconds of an hour episode, which is how more and more people are consuming their video these days. 

  3. Benny Radjasa from Armonix Digital, Inc., November 4, 2015 at 6:18 p.m.

    What is the value of being able to interact instantaneously with the brand through desktop or mobile video ads?  Is this value equal to the increased CPM of desktop or mobile video?  Advertising campaigns are given budgets, so it is natural that we quantify all of these “factors”.  Do direct response advertisers favor desktop or mobile video, while branding advertisers prefer the reach of TV?  Obviously everyone's perceived pricing differs on what costs how much.   I say let all of them duke it out in an auction, if only it were that simple.

  4. John Grono from GAP Research, November 9, 2015 at 11:30 p.m.

    Ed, while I like the idea of "what is seen" or "what is recalled", are you talking about the 'seeing' and 'recall' of the programme or of the ad?   The broadcaster owns and sells the programme - the context that the ad content is carried in - while the advertiser owns and provides the ad content.

    So if the broadcaster pays for the analysis of the 'seeing' and 'recall' of what they own - the programme - who pays for the analysis of the 'seeing' and 'recall' of the ad?   Is it the advertiser?

    Also, what are the implications of varying rates on this 'factor'?   Does someone who makes a crappy ad that gets low 'seeing' and low 'recall' compared to either the programme or the ad norm within that programme get to pay less?   If an ad is above the norm, does the broadcaster surcharge the spot?

    Wouldn't that be rewarding the bad and penalising the good?   I know where you are coming from - I just don't like the potential implications!


  5. Ed Papazian from Media Dynamics Inc, November 10, 2015 at 5:33 a.m.

    John, I'm referring only to the program content and not necessarily as an ongoing study, but mainly as a way to create an adjustment factor for comparison across screens. For example, let's say that the peoplemeter remains in vogue for in-home TV, including SVOD, but the PPM is used for smartphones. In the first case, the "respondent" is, at least, claiming to be a program "viewer"; in the latter case no such claim is made and "viewing" is assumed. I would like to see what percentage of each "audience" could play back the content in a well designed, fair research study using exactly the same content in both cases. If the answer for the peoplemeter was 76% but the smartphone figure was 45%, then we've learned that the "audiences" as measured are not equal, due, no doubt, to lack of attentiveness and other factors. Next step: see how this varies by demographic and type of program content, but not for ads, as they function more or less independently.
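The adjustment-factor arithmetic Papazian describes can be sketched as follows. This is a hypothetical illustration only: the audience sizes are invented, and the 76%/45% recall rates are the example figures from his comment, not real study results.

```python
def effective_audience(measured_audience: float, recall_rate: float) -> float:
    """Discount a measured audience by the share who could recall the content."""
    return measured_audience * recall_rate

# Hypothetical audiences of 1,000,000 on each device, measured two ways:
# a peoplemeter for in-home TV, a PPM for smartphones (per the comment).
tv_effective = effective_audience(1_000_000, 0.76)      # peoplemeter recall: 76%
mobile_effective = effective_audience(1_000_000, 0.45)  # PPM recall: 45%

# The ratio of recall rates is the adjustment factor a buyer might apply
# to the PPM-measured audience to make it comparable to the TV figure.
mobile_adjustment = 0.45 / 0.76  # roughly 0.59
```

The point of the sketch is that the raw "audience" counts look identical, but once each is discounted by its recall rate, the mobile figure is worth roughly 59% of the TV figure, which is the kind of cross-screen correction Papazian is proposing.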

  6. John Grono from GAP Research, November 10, 2015 at 5:45 a.m.

    Gotcha.   Basically a recall propensity factor by device type as the first step.   If that works, it can be extended to either programme genre, or daypart or actual programme (though programmes would need to be measured swiftly and frequently - but could be a genre/daypart index).   Then demos.

    I hear calls for that also for the ads ... which I think would send everyone broke.

  7. Ed Papazian from Media Dynamics, November 10, 2015 at 10:15 a.m.

    John, regarding my reservations about the PPM assumption that the wearer/carrier is reached automatically whenever the device picks up the channel's encoded audio signal, I realize that the panel members don't wear/carry their PPMs all of the time. I think that the current figure is about 85-90% of the time, not 100%. However, the times in question---where it is not worn or carried---are mostly marginal ones with low TV activity---again, as I recall.

    I have seen old Arbitron comparisons of TV "viewing" as shown by PPM panelists versus Nielsen's meter-plus-diary studies. These found that the PPMs showed about 15% more viewing, but this broke down into a 2% higher figure for broadcast and a 130% increase for cable. Also, the largest "increases" were in the early AM ( +19% ), daytime ( +26% ) and after midnight ( +132% ), which suggests that the main cause of the difference was away-from-home "viewing".

    This is where any problem probably lies. I suspect that the PPMs substantially overstate out-of-home viewing. A content recall study, as suggested previously, would either reveal this or put the issue to rest. Isn't it about time for a real validation rather than merely assuming that viewing is taking place?

  8. Ed Papazian from Media Dynamics, November 10, 2015 at 10:30 a.m.

    John, I forgot to mention that in the old Arbitron comparison, the largest gain in "viewing"----comparing the Nielsen meter plus diaries vs. the PPMs---occurred among 18-34s ( +43% ). Again, this tells me that out-of-home accounted for most of the differences. Not that out-of-home viewing doesn't exist---of course it does. But how much of this activity is real and how much is due to the 100% viewing assumption underlying the PPM system? Isn't it time for similar comparisons using current data to be made so we can begin to evaluate the comparability question?

  9. John Grono from GAP Research, November 10, 2015 at 5:03 p.m.

    Thanks for the information, Ed.   My concerns with portable measurement devices and carry rates actually relate more to radio.   For example, many people are woken by their clock radio (or via an audio stream to their smartphone these days) but aren't carrying the measurement device.   The other hole we found was daytime radio listening at work.   If the device is a wearable and they go into a meeting then OK, but if it is an app on a smartphone, many businesses frown on phones during a meeting and it is left on the desk racking up listening minutes.
