Ahead of the TV upfront ad market, Nielsen has cleared a major hurdle: the Media Rating Council has accredited the "Big Data + Panel" measurement Nielsen has long worked on.
The new measurement combines Nielsen's legacy panel measurement with granular data from 45 million homes and 75 million devices, drawn from smart TVs and the set-top boxes of TV-video providers. Those traditional and new TV-video providers include Comcast, DirecTV, Dish Network, Roku and Vizio.
“This effort marks the first time MRC has accredited a hybrid panel/Big Data product inclusive of persons level estimates,” said George Ivie, CEO and executive director of the MRC.
This comes after the MRC recently accredited Nielsen's integration of first-party live streaming data and re-accredited its traditional panel measurement of around 40,000 homes.
So far, the accredited first-party live streaming data covers select NFL games.
The timing works well for many advertising brands, TV networks and streaming platforms, given the talk of so-called 'alternative currencies' over the last few years amid the rise of connected TV/streaming platforms.
Executives believe 'Nielsen Big Data + Panel' will provide much-needed clarity and continuity for brands when it comes to cross-platform media buys, which include targeting advanced audiences at scale.
Nielsen has been TV’s long-time "currency" when it comes to upfront, scatter and other TV advertising deal-making.
Advertisers and media agencies started adopting Big Data + Panel a year ago, ahead of last year's upfront advertising marketplace, which typically starts in June and runs through August.
Nielsen Big Data + Panel is a component of Nielsen One, a cross-media measurement platform that helps media buyers and sellers understand who is watching what content on what devices. Nielsen recently expanded Nielsen One to advanced audiences, planning and measurement.
Nielsen One has not been submitted to MRC for auditing/evaluation -- although Nielsen is planning to do so in the future. Nielsen One provides deduplicated audience measurement across platforms, including TV networks, streaming services, and connected TV providers.
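For readers unfamiliar with the term, here is a loose sketch of what "deduplicated" cross-platform measurement means in practice; the platform names and household IDs are invented for illustration and are not a description of Nielsen's actual methodology.

```python
# Loose illustration of deduplicated cross-platform reach: a household seen on
# several platforms is counted once. Platform names and household IDs are invented
# for the example; this is not a description of Nielsen's methodology.

platform_viewers = {
    "linear_tv": {"hh_001", "hh_002", "hh_003"},
    "streaming": {"hh_002", "hh_004"},
    "ctv":       {"hh_003", "hh_004", "hh_005"},
}

duplicated_total = sum(len(v) for v in platform_viewers.values())   # 8 if simply summed
deduplicated_reach = len(set().union(*platform_viewers.values()))   # 5 unique households

print(f"Summed across platforms: {duplicated_total}")
print(f"Deduplicated reach:      {deduplicated_reach}")
```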
A long-awaited, major and welcome improvement, reducing error and zero cells for actual persons measurement. Congrats to Pete Doe, his predecessors, and the Nielsen Company. How will other measurement providers respond?
Jack, this is a good move for Nielsen, as it negates the biggest argument in favor of a rival national TV rating service: that the people meter panel was too small. However, assuming no change in the 45,000-home people meter panel, which will provide viewer-per-set factors to be applied against the 75 million set usage findings, we are still dealing with a very small sample to produce viewer estimates.
For example, let's say that 25,000 of the 45,000 panel homes are actually equipped with the button-pressing devices that viewers use to identify themselves as "watching" (I don't know the exact number, but I am told that it's not 45,000). Then we still have a major VPS (viewer-per-set) sample size problem for a lot of shows, especially if we try to break them out by demos. If a broadcast network show's episode garners a 1% set usage rating per minute, that may mean that half a million or more ACR/STB sets were tuned in, depending on how the findings are weighted, which is great. But if only 25,000 panel homes provide VPS estimates, you are probably talking about 200 men, 250 women and 50 kids. If these are broken down by age, say 18-34, we shrink our VPS sample down to 20 men and 25 women. And that's at a 1% household tune-in rating. More typical are set usage ratings one tenth that size. Think about that.
Also, consider that no indication of attentiveness is included, though we know that at least half of the time when a commercial "viewer" is tallied, that "viewer" was either absent or paying no attention. So it's one step forward (much larger set usage samples), but several other very important steps were not taken.
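For those who want to follow the arithmetic in Ed's comment, here is a rough back-of-envelope sketch; the button-equipped panel count, the 1% rating and the 18-34 share are the commenter's hypotheticals or assumed values, not Nielsen figures.

```python
# Back-of-envelope version of the viewer-per-set (VPS) sample-size concern in the
# comment above. Every figure here is the commenter's hypothetical or an assumed
# value for illustration; none of it is actual Nielsen data.

panel_homes_with_buttons = 25_000   # assumed subset of panel homes with people-meter buttons
big_data_sets = 75_000_000          # ACR/STB devices contributing set-usage data
set_usage_rating = 0.01             # a 1% per-minute household tune-in rating

# Set-usage side: the big-data footprint produces very large tuned-in counts.
tuned_in_sets = big_data_sets * set_usage_rating
print(f"Tuned-in big-data sets: {tuned_in_sets:,.0f}")                    # 750,000

# Panel side: only the button-equipped homes supply the who-is-watching factors.
tuned_in_panel_homes = panel_homes_with_buttons * set_usage_rating
print(f"Panel homes supplying VPS factors: {tuned_in_panel_homes:,.0f}")  # 250

# The commenter's rough split of the people behind those ~250 homes.
vps_men, vps_women, vps_kids = 200, 250, 50
share_18_34 = 0.10                  # assumed share of adults aged 18-34, for illustration
print(f"Men 18-34 behind the estimate:   ~{vps_men * share_18_34:.0f}")   # ~20
print(f"Women 18-34 behind the estimate: ~{vps_women * share_18_34:.0f}") # ~25
```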
Thanks Ed. We agree it's a step forward, good for the industry. There is always room for more sample (but it does substantially reduce zero cells, which is a meaningful change). What's important is error over a schedule of dozens or hundreds of ads, not just one spot. And it's likely better than any competitor's alternative.
As far as attentiveness is concerned you and I will continue to differ on who is actually responsible for attentiveness (or the lack thereof) to ads and whether it is the publisher or ad creator. If the latter, why should the publisher suffer? Unless we can attribute attention correctly we can't penalize either partner. In any case, better is nice to see.
Jack, we don't differ all that much about who is responsible for TV commercials being seen. You say, if I've got that right, that it's the advertiser's responsibility (and the agency's, of course) but not the TV time seller's. I say it's a shared responsibility, with both parties involved. Naturally, a bad commercial will not fare well, even in the best of contexts, so ad quality and sensible brand positioning, claims, etc. are a vital component. We agree.
But there's more to it.
TV time sellers provide high, moderate or low engagement contexts by the nature of their programs and this has a definite effect on commercial exposure no matter how well crafted the ad messages are.
Also, demographics are an important variable: lowbrow and older viewers are much more likely to be attentive to commercials than younger ones. This, too, is controlled by the TV time sellers, again by the nature of their programs. One might say that the time buyers can select only high-attention shows, so what's the problem? But, in reality, the CPM penalty that the buyers will pay, even if granted this degree of selectivity, will be so high that it's a losing game. So they will be forced, by CPM pricing, to take bundles of good and not-so-good TV show contexts.
Finally, the amount of ad clutter in a break has an effect on ad message recall, which means less effective ad exposures; again, even the best commercials suffer in cluttered breaks. So here, as well, the seller is affecting the outcome.
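To make the CPM trade-off in the comment above concrete, here is a rough sketch comparing cost per attentive thousand under assumed figures; the CPMs and attention rates are invented purely for illustration.

```python
# Rough sketch of the CPM trade-off described in the comment above: a selective,
# high-attention-only buy vs. a cheaper mixed bundle, compared on cost per
# attentive thousand viewers. The CPMs and attention rates are invented for
# illustration only.

def cost_per_attentive_thousand(cpm: float, attention_rate: float) -> float:
    """Effective CPM once the audience is discounted to attentive viewers."""
    return cpm / attention_rate

high_attention_only = cost_per_attentive_thousand(cpm=60.0, attention_rate=0.65)
mixed_bundle = cost_per_attentive_thousand(cpm=30.0, attention_rate=0.45)

print(f"High-attention-only buy: ${high_attention_only:.2f} per attentive thousand")  # ~$92
print(f"Mixed bundle:            ${mixed_bundle:.2f} per attentive thousand")         # ~$67
```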
A great post Wayne, and great comments from Jack and Ed.
"DO IT YOURSELF" ratings can be very misleadinng, and it is very positive that the advertisers and media agencies can see the benefit of having 'a currency'. AU has the same issue at the moment.
Ed, I do understand the importance of 'attentiveness' but I think that it is a very difficult vector to assess.
First, what is the definition of "attentiveness"? It is not as simple as whether a program is turned on or off (of course, a program can still be on while someone leaves the room).
Second, was the attentiveness positive or negative (or was the ad just ignored)? It is not unusual for viewers to complain loudly at the television about an ad or product that they don't like. If we could measure it, would the value be the positives minus the negatives?
Third, the desire for attentiveness seems to be a requisite of the advertiser. I wonder how much they are willing to contribute to fund the research; researching 'every ad' multiplies the volume and the cost many times over.
John, I don't see it as being very complicated at all. When people yell at a commercial, that's a reaction, either to the stupidity of the ad message, or because the break contains too many commercials, or because the commercial was just included in the previous break. These reactions are a function of the quality of the ad message, how it was presented, and complaints about excessive frequency. These are both seller and advertiser responsibilities, not just the advertiser's.
I believe that if we keep it simple and stick just to the seller's contribution, either positive or negative, we are on safe ground.
As for what attentiveness data can be used for, it's absolutely not just for "media planning" or TV "time buying". In our recently released report on TV attentiveness, we explain the importance of attentiveness in determining who was actually present and watching any particular segment of content, programming or commercials. We don't get such information on a valid basis from any of the proposed rating surveys, or from the new Nielsen design. We also explain how important it can be for advertisers studying the effectiveness of their ad campaigns, as well as for TV programmers in fashioning their shows.
Sadly, attentiveness has been positioned almost exclusively as a "media buying" tool and, consequently, advertiser CMOs and brand managers, as well as TV programmers, have not the slightest clue about the additional benefits they might gain. So they are no-shows in these deliberations, much to their loss.
Thanks Ed.
I was thinking that you meant it should be part of the planning process. For example, if 1 million people watch a program but only 70% paid attention, include that as 700k in the campaign.
Yes, that would be OK, given that the advertiser provides their 'attention loss' value. In essence, a very handy model of the quantum of people who had some brand interest rather than those that just happened to be in a viewable place.
That would have two advantages: analyse the spot value and assess the return.
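A minimal sketch of that attention-adjusted planning step, assuming the advertiser supplies the attention rate; the 1 million viewers and 70% figure come from the comment's own example.

```python
# Minimal sketch of the attention-adjusted planning step discussed above: discount
# the reported audience by an advertiser-supplied attention rate. The 1 million
# viewers and 70% rate are the example from the comment, not measured data.

def attention_adjusted_audience(reported_audience: int, attention_rate: float) -> float:
    """Share of the reported audience assumed to have actually paid attention."""
    return reported_audience * attention_rate

program_audience = 1_000_000   # reported program viewers
attention_rate = 0.70          # assumed/advertiser-supplied attention rate

effective_reach = attention_adjusted_audience(program_audience, attention_rate)
print(f"Planned effective audience: {effective_reach:,.0f}")   # 700,000
```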
There is plenty of context variability to go around. That's why folks get "fair rotations". Adding all the complexity you require will add costs that will be paid for by advertisers. In the end, it will not yield any more effectiveness than having fair rotations.