Nielsen, which is poised to spin off its consumer research unit into a separate company, this morning announced a plan to integrate its media measurement services into a “single, cross-media solution,” also known as “the Holy Grail of measurement.”
The solution's name, “Nielsen One,” is ironic given how many times the researcher has split up, reconsolidated and ultimately spun off its research assets over the course of its history.
Nielsen said “One” is aimed at the “more than $100 billion video advertising ecosystem” and that it will utilize a “phased approach” that would begin with a rudimentary version launching in two years “with the intention to fully transition the industry to cross-media metrics” in four years.
Nielsen said it has already begun “unifying its technology” around “One” to modernize its panels, platforms and products, based on its new “ID resolution” system, ultimately leading to the delivery of audience estimates at the “exact commercial minute” level.
Joe, while I understand why Nielsen is going in this direction, as a sound business development and protection move, what will probably happen is that Nielsen will produce individual commercial or, more likely, commercial-minute "audience" estimates for digital platforms, which people will think are comparable with its TV measurements. The latter are, of course, highly inflationary: the assumption that everyone who indicated they were "watching" the show at the onset of tune-in "viewed" every second of content, including the commercials, has been totally discredited. As a result, an advertiser who thinks he/she bought 100 GRPs per week actually gets about 40-50 GRPs where somebody stayed in the room and at least looked at the screen for a few seconds as the commercial played out. A similar assumption will, no doubt, be used for digital "ad exposure," thereby creating even more phantom audiences for advertisers to pay for.
What's needed is an ongoing eyes-on-screen monitoring system that provides adjustment factors so the agencies can reduce the inflated device-usage stats to more meaningful levels. I know how such thoughts strike terror in the hearts of media time and space sellers, but consider the possibilities if advertiser CMOs woke up and paid attention to what is really going on. Once they realize that the reach and frequencies their plans have constructed are vastly overstated, especially the frequencies, they may decide to spend more, not less, on media to correct the underdelivery.
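A minimal sketch of how such adjustment factors might work in practice, illustrating the 100-GRPs-bought versus 40-50-GRPs-delivered point above. All numbers here (the plan weights and the eyes-on factors) are hypothetical, not real Nielsen or attention-panel data:

```python
# Hypothetical illustration: reducing device-based GRPs to "eyes-on" GRPs
# using normative adjustment factors. All plan numbers and factors are invented.

# Planned (device-measured) GRPs per week, by daypart
planned_grps = {"daytime": 30, "prime": 50, "late_night": 20}   # totals 100 GRPs

# Hypothetical eyes-on factors: share of the tuned audience with eyes on screen
eyes_on_factor = {"daytime": 0.35, "prime": 0.50, "late_night": 0.40}

adjusted = {dp: grps * eyes_on_factor[dp] for dp, grps in planned_grps.items()}

print("Device-based GRPs:", sum(planned_grps.values()))       # 100
print("Eyes-on GRPs:", round(sum(adjusted.values()), 1))       # ~43.5
```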
Good point, Ed. In fact, Linda Yaccarino mentioned during the panel session of the Nielsen One launch that NBCU/Comcast is looking to measure attention. It is not overly difficult technically to incorporate a TVision-style subset within the so-called "Truthset"; the output, however, may be harder to meet with widespread support. I am of the opinion that trying to change "everything at once" in a measurement system will result in paralysis. Better to build a North Star (as the WFA has done) and move towards it in a stepwise manner.
Andy, I understand your point about getting started and later improving the system. The problem is that all too often the system, as approved by various parties with axes to grind, gets locked in and is not improved. Instead, everyone pats each other on the back and moves on to the next thing; meanwhile "we" are stuck with a compromise system which, as it happens, is biased in favor of the sellers who will do most of the funding.
Even without eyes on screen, this is miles ahead of measurement for other media. Especially when you consider how firms continue to get away with grading their own homework.
Ed, with AU's OzTAM system we have the raw data at the second-by-second level. I assume the US is similar.
We continue with the 'average minute' rating for purely pragmatic reasons (such as not having to store and process 60x the data). So the question is how we should allocate the audience to a specific channel for the minute. There are various models: dominant channel, middle minute, etc. The thing is, in the big picture it doesn't make a huge amount of difference; just a few more angels on the head of a few more pins.
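For readers unfamiliar with those allocation models, here is a rough sketch (not OzTAM's actual algorithm) of how a minute of second-by-second tuning data could be collapsed to a single channel under the two rules named above; the data layout is invented for illustration:

```python
from collections import Counter

# One panelist's tuning record for a single minute: 60 entries, one per second,
# each the channel the set was tuned to (None = set off). Data invented.
seconds = ["Ch7"] * 25 + ["Ch9"] * 35

def dominant_channel(seconds):
    """Assign the minute to the channel tuned for the most seconds."""
    counts = Counter(s for s in seconds if s is not None)
    return counts.most_common(1)[0][0] if counts else None

def middle_minute(seconds):
    """Assign the minute to whatever channel was tuned at the middle second."""
    return seconds[len(seconds) // 2]

print(dominant_channel(seconds))   # Ch9 (tuned for 35 of 60 seconds)
print(middle_minute(seconds))      # Ch9 (second 30 falls after the switch)
```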
As that tuning data is captured passively, it is accurate (while acknowledging that sampling methodology is critical for representativeness). Capturing 'attention' is more difficult because humans are involved. It is technically possible to get 'eyes-on' data, but its 'invasiveness' is likely to degrade panel quality: better data from less representative people. As for 'impact', again that is technically possible, and is generally done ad hoc on very small samples for very specific content. Do we use that to come up with 'factors' to downweight to the average?
Should we get to the confluence of those three factors simultaneously, I would posit that (i) the panel size would be minuscule and we would lose granularity in the data, and (ii) it would be non-representative of the population at large.
John, although I mentioned "impact", that was just to complete the picture. I do not believe it is the media's duty to fund measures of "impact", which will vary tremendously advertiser by advertiser. What I, and perhaps Tony, are pushing for is a parallel panel that does measure eyes-on-screen to the point where it is possible to create reduction factors, by demographic, length of ad, degree of commercial clutter per break, daypart and program type, that could be applied by those who see fit to the "raw" ad-on-screen findings. It is asking too much to create a panel that could do both the raw and the attention-level research at the same time, as the latter would require a sample about three times greater, and I doubt that the ad sellers would foot the bill. However, a smaller, tightly executed panel could supply data for all of the ads in the various situations I mentioned above, as well as tallying the reach/frequency of specific ad campaigns and commercial executions and how all of their "impressions" performed on a collective basis. This is not a dream. It can be incorporated into any plan, and budgeted for, right now.
I agree that first-party measurement is onerous and that a parallel secondary panel may work.
But over the years I have considered the variables involved that primarily affect 'eyes-on'.
For example: geography, gender, age, household size, and the devices in the home, among others. Let's stop with that list and use my guesses for the number of levels each needs: geography x 1, gender x 2, age x 7, HH size x 4, devices x 8.
That small list generates 448 'cells' (1 x 2 x 7 x 4 x 8). If we had a recommended n=30 minimum per cell, we're talking about a panel of roughly 13,500 people as a minimum. Of course, we could run a determinants-of-'eyes-on' analysis and 'interlace' correlated cells together to reduce the overall sample size somewhat.
But we need to consider that with a TV universe of 308 million and many programmes struggling to get 10 million nationally, a 3 rating is a good number these days. So that means around one in 30 of the panel would be able to provide the required 'eyes-on' data for a given programme. So while we have a theoretical n=13,500 sample, at a programme level the effective panel would be around n=450. That's roughly the same as the number of required reporting cells, so at a cell level we're talking around n=1 per cell (and, of course, many cells would be zero). To get an n=30 effective 'eyes-on' cell sample you'd need a panel of around 400,000. And that is just our secondary 'behavioural' panel.
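The arithmetic above, restated as a quick sketch; all inputs (the cell levels and the 3-rating assumption) are the guesses already given, reproduced only to show how the 13,500 and 400,000 figures arise:

```python
# Back-of-envelope panel sizing using the guessed cell levels above.
levels = {"geography": 1, "gender": 2, "age": 7, "hh_size": 4, "devices": 8}

cells = 1
for n in levels.values():
    cells *= n                        # 1 * 2 * 7 * 4 * 8 = 448 cells

min_per_cell = 30
min_panel = cells * min_per_cell      # 13,440 -> "around 13,500"

one_in = 30                           # a 3 rating: roughly 1 in 30 panelists watching
effective_panel = min_panel / one_in  # ~448 programme viewers ("around n=450")
required_panel = min_panel * one_in   # ~403,000 for n=30 per cell ("around 400,000")

print(cells, min_panel, round(effective_panel), round(required_panel))
```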
John, as I noted in my last comment, it is probably impractical to build a panel of such a size that it can measure and report eyes-on-screen data reliably for every individual telecast and every commercial or break. However, it is possible to run a parallel panel of reasonable size that creates, and periodically updates, normative findings, or "raw audience" adjustment factors, that account for key variables. These would be certain sex/age groups, essentially young, middle-aged and old, as well as program type, daypart, commercial length and in-break ad clutter. For each commercial "exposure" situation you would have only six or seven variables, and you would have the option to focus on only one or two if desired. The findings would be available as general adjustments, not made specifically per telecast and commercial. It takes a bit of working out, but there need be no fear that we would be carving up the audience into hundreds of unstable cells. That's overkill and would not materially alter the ultimate interpretation of the results.
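A sketch of what such "general adjustments" could look like in practice: a small normative lookup keyed on a few of the variables named above (age group, daypart, commercial length), applied to a raw device-measured audience. Every factor, key and figure below is hypothetical:

```python
# Hypothetical normative eyes-on adjustment table. Values invented for illustration.
adjustment = {
    ("18-34", "prime",   15): 0.42,
    ("18-34", "prime",   30): 0.48,
    ("35-54", "prime",   30): 0.55,
    ("55+",   "daytime", 30): 0.60,
}

def eyes_on_audience(raw_audience, age_group, daypart, ad_length, default=0.50):
    """Apply a normative factor to a device-measured audience estimate.
    Falls back to a default factor when a cell has no norm."""
    factor = adjustment.get((age_group, daypart, ad_length), default)
    return raw_audience * factor

# 2,000,000 tuned adults 18-34 in prime for a 30-second spot -> ~960,000 eyes-on
print(eyes_on_audience(2_000_000, "18-34", "prime", 30))
```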
With regard to other matters, the eyes-on-screen panel would also identify each program and commercial that appeared on the screen, and could be used to determine the eyes-on reach and frequency patterns of most campaigns in aggregate, as opposed to exposure-by-exposure detail. That would, indeed, be a huge step forward. In like manner, an advertiser could determine how its ad campaign performed relative to others, or to rival brands. How many of the people who watched Brand A's commercials also watched those of rival brands B and C? How is this trending? What about ad campaign wearout? Etc., etc. And the TV networks could determine how many people who watched their program promos actually tuned in to those shows. A programmer could track the eyes-on-screen ratios when different performers were on screen, or correlate them with ratings to see if one could predict trouble or future success for a series. There are so many uses over and above the number grinding that characterizes time buying. Why not plan for a really big improvement at the outset? It's not a question of feasibility; it's a question of understanding the value of all the information that can be generated. Why think small?
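To make the aggregate reach/frequency and brand-duplication uses concrete, here is a small sketch over an invented eyes-on exposure log, where each record is one verified commercial exposure for one panelist; the log, panel size and brands are all hypothetical:

```python
# Invented eyes-on exposure log: (panelist_id, brand) per verified exposure.
log = [(1, "A"), (1, "A"), (1, "B"), (2, "A"), (3, "B"), (3, "C"), (4, "A"), (4, "A")]
panel_size = 10

def reach_and_frequency(brand):
    """Percent of the panel reached and average exposures per person reached."""
    exposures = [pid for pid, b in log if b == brand]
    reached = set(exposures)
    reach_pct = 100 * len(reached) / panel_size
    avg_freq = len(exposures) / len(reached) if reached else 0
    return reach_pct, avg_freq

def duplication(brand_a, brand_b):
    """Share of brand A's eyes-on viewers who also saw brand B."""
    viewers_a = {pid for pid, b in log if b == brand_a}
    viewers_b = {pid for pid, b in log if b == brand_b}
    return 100 * len(viewers_a & viewers_b) / len(viewers_a) if viewers_a else 0

print(reach_and_frequency("A"))   # (30.0, ~1.67): 3 of 10 panelists, ~1.7 exposures each
print(duplication("A", "B"))      # ~33%: 1 of Brand A's 3 viewers also saw Brand B
```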