Commentary

The Suite Smell Of Nielsen's Big Data Success


There was a time when Nielsen counted the number of people watching TV. These days it counts the number of data partners enabling it to model who is watching it.

Take this morning's announcement about a deal renewing -- and expanding -- its data partnership with CTV platform Roku.

"New multi-year deal integrates Roku's data to fuel Nielsen's measurement suite," proclaimed its press announcement, making me wonder what a "measurement suite" actually is.

I know Nielsen has panels, services, ratings, and even a "marketplace," but a suite? Sweet!

While details of the expansion and how it will fuel that suite were not disclosed, Nielsen touted its ongoing relationship with Roku, citing its "large-scale TV data as an input to its Big Data + Panel measurement for both linear and streaming ratings."

"This will help deliver more accurate performance results for advertisers running campaigns on Roku and across the broader TV landscape," it added without explaining how or why.


It reminded me of a column I wrote recently about how big media agencies used to tout news about their big media buys and media supplier relationships, but lately have been focused more on announcing data-integration deals with partners -- including Nielsen. 

As with the agency data-integration deals, Nielsen is big on Big Data announcements, not so much on how they actually work.

Earlier this year, DatafuelX's Howard Shimmel shared some analyses with me showing why Big Data is better.

In March, he provided one showing massive "in-tabs" for Nielsen's Big Data + Panel:

  *   4.1 million for Adults 18+ in Asian Households

  *   12.4 million for Adults 18+ in Head of House Hispanic Households

  *   11.0 million for Adults 18+ in Head of House Black Households

  *   5.9 million for Adults 18+ in Spanish Dominant Households

I never reported on it, because it confused me that you could have in-tabs -- which represent the panelists actually reporting usable data in a panel -- in a Big Data + Panel. I mean, the last time I checked, the panel part of Nielsen's hybrid system had only about 100,000 panelists (42,000 households) in it.

"These are the estimated sample sizes for Nielsen’s Big Data offering. Remember that they are all subsets of the 45 million total sample," Shimmel replied when I asked him how he was using the term in-tab in relation to this analysis.

Hmm. I have been thinking about what that actually means ever since, but this is the first time I'm writing about it.

Later, in August, Shimmel shared another DatafuelX analysis indicating that Nielsen's Big Data + Panel methodology is much more stable and produces more usable data than its old panel-based measurement system (see chart at top).

That's a good thing, considering that greater stability was the whole rationale for Nielsen moving to modeled TV audience estimates derived from a Big Data set built from disparate sources -- including a somehow-expanded Roku one -- but it still doesn't answer my apples-to-oranges question.

I know this is the new way of the media audience-measurement world -- and it's even been blessed by, of all entities, the Media Rating Council -- but I just wonder what it actually represents.

Earlier this year, I wrote a column about long-time Nielsen and NBCUniversal exec Kelly Abcarian speaking at the CIMM East Summit and pointing out that "every" number we now use in the advertising and media measurement industry is "modeled."

Again, I don't know if that's necessarily a bad thing. But I used to understand how audience measurement panels worked, and I don't understand how Big Data modeling works -- or whether the better results are actually meaningful results, or just more usable ones.

 

4 comments about "The Suite Smell Of Nielsen's Big Data Success".
  1. Ed Papazian from Media Dynamics Inc, December 22, 2025 at 6:38 p.m.

    Joe, we have had this discussion before but it bears repeating. Yes, absolutely, a much larger panel for set usage will give you considerably greater stability--or consistency--in your homes-reached projections--especially for low-rated channels or programs. That's providing your big data panel operation is properly vetted by the MRC. But this does not necessarily mean that the resulting ratings are more "accurate," especially when used in aggregate to generate total schedule GRPs across many shows--which is how most TV audience delivery is guaranteed to advertisers.

    The real question--which is rarely discussed--is how the new system estimates the numbers and kinds of viewers who are supposedly watching when their sets are tuned in. Here, we seem to be stuck with Nielsen's current people meter panel. Unless a major expansion is planned, it's only about 27,000 people meter homes, not the much-publicized 43,000--as the latter figure includes local market panel homes which do not feature the key button-pressing option to signify that a given resident is "watching." Hence sample size becomes much more of an issue when the people meter panel--based on a few hundred self-proclaimed viewers of a particular show episode--is broken down by sex, age, income, race, etc. and projected against a set-usage finding based on 500,000 "big data" homes tuned in for the same show. To put it bluntly, we need a much larger true people meter panel than seems to be planned, because the crucial viewer-per-set factors will, indeed, be rather unstable.

    Going one step beyond this is the question of the credibility of the people meter methodology itself. A far better method for defining viewing--in particular, attentive viewing--would be the use of an observational system which "photographs" the area facing each screen to establish exactly who is or is not watching--that's eyes-on-screen watching. But this approach has been nixed by the sellers, as it would produce much smaller, though more realistic, "audience" numbers.

  2. William Abbott from GAC Media, December 22, 2025 at 10:45 p.m.

    Joe, you make a fair argument, but the headline and chart don’t just miss the mark; they fundamentally distort the truth.

    Your column raises legitimate questions about whether Big Data + Panel is actually more accurate or meaningful. Yet the headline and chart declare success and stability, and that is flat-out wrong.

    When you look at the data hour by hour, Big Data + Panel is dramatically LESS stable, not more. In a four-week analysis across 32 networks, nearly half of all measured hours swung by more than 20% between Big Data + Panel and Panel Only. Thousands of hours moved by 50% or more, and among younger demos, roughly one in ten hours showed extreme up-or-down changes. That is not stability. It’s chaos.

    Let’s be clear here and admit this distortion originates with Nielsen’s own framing and promotion of the data, and it has materially harmed sellers. In our case, it has resulted in tens of millions of dollars in lost value over a two-year period.

    The DatafuelX chart buries this reality by averaging everything together. Bigger data makes the averages look prettier, but underneath, the numbers are jumping around far more than they ever did before.

    So the headline reads like a victory lap for Nielsen & Big Data, while the substance of your column actually exposes the problem. That disconnect isn’t just misleading; it also gives readers a false takeaway. If we’re going to have an honest conversation about measurement, Nielsen has to stop being dishonest in selling volatility as progress, and they need to be called out when they are guilty.

  3. Ed Papazian from Media Dynamics Inc, December 23, 2025 at 8:21 a.m.

    William, I don't have access to the data so I have to ask some questions.

    When you speak of hour-by-hour variations between the people meter panel's findings and the new system, are you referring to the total viewing for all networks or sources by hour, or are you referring to the ratings for individual networks and sources?

    If those kinds of variations are evident for all viewing and they go up or down more or less randomly--as is suggested--then that's a real concern. But if we are talking about individual show ratings, with an average tune-in rating of .2%, then I'm not so sure. For example, if every time a .2% rating in the old Nielsen system is represented as a .3% by the new system--which, technically, is a 50% "discrepancy"--I am far less concerned, as I consider a .2% and a .3% rating to be essentially the same finding. Similarly, if the new method produced a .1% rating, this, too, is the same as a .2% as far as I'm concerned. The only time concerns might arise would be if the difference was always in the same direction--say, pointing downward.

  4. Joe Mandese from MediaPost Inc., December 23, 2025 at 8:48 a.m.

    @William Abbott: You are fundamentally correct, and it's mostly due to the fact that I often write columns like this one too cryptically. If you read between the lines (including the use of the word "suite" in the headline), I was hoping readers would realize that some of the framing of Nielsen's Big Data success was intended as sarcasm. But I've always appreciated Howard Shimmel's efforts to make his case and figured this might be as good a time as any to present it.

    Personally, I'm not convinced, and the most important part of this story (as well as the Big Data integration stories coming out of big ad agencies) is that we now live in an industry that prizes data for data's sake, and that data has become the grease that keeps the gears turning.

    It has been frustrating for me personally, because it's almost impossible to vet superior data integration claims when I can't see either the data or the integration, so all a trade journalist can do is report what people and companies say they can do, and let it stand on its own merits. But I can also keep raising the question: Is it actually better?

    That's what I was trying to do with this column.

    The other most important part is the ongoing reference to Kelly Abcarian's brilliant quip that "every single number" the advertising and media industry now uses is modeled. Not primary audience measurement. I think that's important to remember, because modeled numbers are only as good as the model (the assumptions being applied) and the data inputs they're being applied to.

    I have no doubt that Nielsen has some of the best data modelers in the world working for it. (See Howard Shimmel.) But they are models nonetheless.
