VAB: Nielsen's New 'Big Data' Measure Is Flawed

TV network advertising trade group the VAB is demanding that Nielsen immediately stop releasing its new, more granular “Big Data” monthly TV reports, citing numerous methodology issues.

This comes just weeks before the upfront TV advertising market is set to begin.

One major point made by the VAB is that Nielsen is, in effect, offering two different measures -- its new Big Data product and its current, long-standing data set derived from the legacy 40,000-home Nielsen Television Panel -- and that this could cause confusion.

Nielsen's “Big Data” effort adds more granular measurement of TV audiences -- drawing on set-top-box data and smart TV Automated Content Recognition (ACR) data -- to its traditional measures of total linear viewership by age and gender.

“The VAB had high hopes for Big Data being a big leap forward in what Nielsen’s measurement and currency can bring to marketers, but after in-depth analysis it’s clear to us that this first data set is rife with serious problems,” says Sean Cunningham, chief executive officer of VAB, in a statement.

In a letter to David Kenny, chief executive officer of Nielsen, Cunningham writes that Nielsen has failed to “provide meaningful level of disclosures, verifications or explanations as to how ‘Big Data’ was/is calculated, including the National Panel/Big Data calculations.”

Cunningham says Nielsen needs to make immediate disclosures, including how the new data was created, and points to many troubling comparative examples.

For one, he says, there were “illogical demographic audience” results when comparing Big Data versus current Nielsen panel-based data “within the same dayparts.”

Cunningham says: “Big Data claims significant audience gains on persons 2+ and P18-49...but also significant simultaneous declines on P25-54.”

He adds that “there are wild swings in comparative gender results in many dayparts / genres /programs -- in which there are double-digit plus/minus swings in male and female viewership that defies any logical pattern or explanation.”

With regard to sports TV programming, Cunningham says that in comparing the two sets of data, live-same day ratings are down, while “persons using TV sets for those same programs are up double digits.”

The VAB also says that while Nielsen touted a key benefit of Big Data -- that it was supposed to yield larger audiences for smaller networks -- much of this has yet to materialize: “Comparisons (Big Data vs. current Panel) reveal that nearly 30% of the smallest 100 networks had lower overall audiences per Big Data.”

The VAB cites other challenges, including that no methodology has been disclosed for the set-top-box data, which comes from pay-TV viewers, or for the Automated Content Recognition data, which comes from smart TV sets. Another issue is that the number of TV homes contributed by smart TV manufacturer Vizio and by smart TV platform and streaming-device maker Roku has not been disclosed.

In a statement, Nielsen says, in response to the VAB: “Up until this letter was issued, we have not received questions from the VAB. In addition, a trade group associated with traditional TV channels is an incomplete and biased subset of the video marketplace. We prefer to work openly with the entire industry to get to the best measurement solution.”

With regard to the upfront ad market, it adds: “Based on feedback across buyers and sellers we made the decision to allow either data set to be used for trading in the fall.... Our approach will enable buyers and sellers to trade against big data plus panel metrics if they so choose, while giving our clients runway to adapt to this launch.” 

The VAB has had major complaints about Nielsen over the last year-and-a-half, including undercounting of TV viewing for most of 2020 through early 2021 due to the COVID-19 pandemic, which prevented Nielsen field engineers from entering Nielsen-panel TV homes and conducting maintenance.

More recently, the VAB expressed concerns after Nielsen revealed that it had understated out-of-home TV viewing.

7 comments about "VAB: Nielsen's New 'Big Data' Measure Is Flawed".
  1. Ed Papazian from Media Dynamics Inc, March 9, 2022 at 10:21 a.m.

    Wayne, when you compare the findings from one survey to another there are, invariably, differences. This was also the case when Nielsen switched to the people meter system in 1987 after testing it for a year or two. The old system featured household diaries used for viewer-per-set factors, which were applied to meter-based set usage data to estimate viewer audiences; the people meter system required panel members to indicate if they were "watching" whenever a channel was selected. As any reasonable person would have expected, the people meter method produced different audience profile results---though for the most part there weren't huge differences. And everyone accepted the new methodology, as it was assumed to be better than the diary component of the old one.

    Fast forward to Nielsen's new system, which, as I understand it, involves the melding of "big data" device usage information from many millions of homes from a mixture of panels and, of course, you will get differences from the 42,000-member people meter operation where "linear TV" is concerned. The problem is how do you determine whether the new data is "accurate" when you never bothered to ask the same question about the old data---until now? That's a pretty tough one to answer. So, unless the new audience levels---and demos---are wildly different from what we have been seeing, I suspect that we may have to live with them.

  2. Sergio Stradolini from Kunay Ltd, March 9, 2022 at 3:03 p.m.

    I agree with you Ed, but you would also expect the differences found between models to be smaller with each change. Peoplemeters vs. diaries are presumably further apart than... well, whatever they do to produce Big Data vs. peoplemeters.

    The key is in the statement "double-digit plus/minus swings". Without knowing how the new figures are produced, it will be hard to accept that perhaps the Peoplemeter sample was flawed and Big Data is the real deal.

    Is it based on actual viewing? Is Nielsen fusing different datasets? Are the audiences calibrated before being released?

  3. Ed Papazian from Media Dynamics Inc, March 9, 2022 at 6:30 p.m.

    Sergio, the sad fact is that the "big data" ACR and set-top-box panels which are being used to obtain "granular" measurements of audiences for digital/streaming video as well as "linear TV" are only providing information on device usage---not viewing---and this will cause a vast inflation of commercial audience projections when these are provided. In short, we won't really know who is watching. One might argue that this is not so bad for smartphones since you know who owns them. Which is correct---but how do you know if the smartphone owner watched the commercial when it appeared on his/her screen? The answer is you don't, and this also applies to all TV sets---smart as well as dumb ones.

    Let's face it, the sellers are clearly running the show and I believe that they are not interested in supporting---or funding---any service that indicates that 70% of their "viewers" either aren't present or are paying no attention to an average commercial. And, to be fair---why should they? Since they pay 80% of the cost for TV ratings while advertisers pay zero, are we really surprised about what is happening?

  4. John Grono from GAP Research, March 9, 2022 at 9:20 p.m.

    Great comments Ed & Sergio (hi Sergio, long time no see/speak!)

    My first reaction was that if the ratings, rankings etc. DIDN'T change then I would be REALLY suspicious. As Sergio points out, it is like when we switched from paper diaries to peoplemeters. But different data doesn't mean it is flawed. Flawed compared to what? The COVID-affected prior data that was deemed to be wrong?

    The basis of the claim seems to be "double-digit plus/minus swings". Is the new data down - i.e. 'wronger'? Maybe the numbers were up - 'less wronger'? So let's have a think about that.

    Percentage changes alone can be misleading. Ironically, the smaller the audience, the greater the percentage differential tends to be.

    For example, you may have a barely watched programme in a poor time-slot. Let's say it averages just 500,000 P2+ (yes, it happens). The US TV population is 308m in 121m homes. So let's investigate a double-digit +/- swing, using +/-10% for ease of calculation. The programme's audience might have decreased to 450,000 or might have increased to 550,000. Let's put that into perspective. The 500k is a P2+ rating of 0.162%. If it dropped to 450k its rating would be 0.146%, and conversely if it grew to 550k its rating would be 0.178%. That is a differential of +/- 0.016%.

    This means that the +/- 0.016% variation, applied to the 42,000-home panel of 100,000+ people, would equate to a loss/gain of around 7 homes and around 18 people in the panel. I also note that the comment was relating to particular demographics within P2+ (which would have a greater standard error), so maybe we should halve those numbers.

    Yes, the data is different. It simply HAS to be different, as we're talking about a changed methodology in a different time-frame with changed content. We should then consider Ed & Sergio's reservations as to what is actually being measured!
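
John's back-of-envelope arithmetic above can be reproduced directly. The minimal sketch below assumes only the figures he quotes -- a 308 million P2+ population, a 42,000-home panel, a 500,000-viewer programme and a +/-10% swing -- plus 110,000 as an illustrative stand-in for his "100,000+ people"; it converts the swing into rating points and then into panel-member equivalents.

```python
# Back-of-envelope check of the swing arithmetic in the comment above.
# Assumed inputs: the 110,000 panel-person figure is an illustrative
# stand-in for "100,000+ people"; everything else is quoted in the comment.
TV_POPULATION = 308_000_000    # US persons 2+
PANEL_HOMES = 42_000           # panel homes
PANEL_PERSONS = 110_000        # assumed panel persons
AUDIENCE = 500_000             # average audience of the example programme
SWING = 0.10                   # a +/-10% "double-digit" swing

def rating(audience: float, population: int = TV_POPULATION) -> float:
    """Audience expressed as a percentage of the P2+ population."""
    return 100.0 * audience / population

base = rating(AUDIENCE)                        # ~0.162 rating points
low = rating(AUDIENCE * (1 - SWING))           # ~0.146
high = rating(AUDIENCE * (1 + SWING))          # ~0.178
delta = base - low                             # ~0.016 rating points

# The same swing expressed as panel members rather than rating points.
homes_equiv = delta / 100.0 * PANEL_HOMES      # ~7 homes
persons_equiv = delta / 100.0 * PANEL_PERSONS  # ~18 people

print(f"rating: {base:.3f}% (range {low:.3f}% to {high:.3f}%)")
print(f"swing: +/-{delta:.3f} rating points")
print(f"panel equivalent: ~{homes_equiv:.0f} homes, ~{persons_equiv:.0f} people")
```

Run as written, this reproduces the figures in the comment: roughly a 0.016-point swing, or about 7 homes and 18 people out of the panel.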

  5. Sergio Stradolini from Kunay Ltd, March 10, 2022 at 4:47 p.m.

    Hi John, it's always a pleasure to read your comments... very spot on!

    There's very little to add, but one fact that is spinning in my head is that live sports programmes are down in ratings, whilst PUT (persons using television) is up during those same dayparts according to Big Data. These tend to be high in audience %, and it does sound odd if I have not misunderstood the VAB's claim. But of course, we don't know if this is affecting prime time Football or a 2am local Fencing Tournament.

    Anyway, I am fully with you, and I hope that the new methodology is disclosed -- as much as it realistically can be -- and so are the analyses produced by the VAB, so we have better grounds for understanding what's going on.

    Nielsen's phrase "a trade group associated with traditional TV channels is an incomplete and biased subset of the video marketplace" suggests that the fight is far from being over.

  6. Ed Papazian from Media Dynamics Inc, March 10, 2022 at 6:22 p.m.

    I would not be surprised if Nielsen's "Big Data" service shows that TV's average minute audience is not as old as it has been assumed---based on the people meter system. The reason for this is that the new data is all device-based---not viewer-based---and it is likely that younger households with one or more children and/or teens in residence have more devices and use them more frequently on a collective basis than older homes without offspring present. It's also possible that children, teen and very young adult viewing was under-reported in the old system due to such panel members failing to report all of their viewing, while their older counterparts were more likely to comply. I don't know what weighting system Nielsen is employing to come up with "viewer" projections, but it's a very tricky business to marry lots of set-top-box and ACR set activity information with a limited sample of viewing claims to get an accurate picture.
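
Ed's point about marrying device data with a limited viewer panel can be made concrete with a deliberately simplified, hypothetical sketch. Nothing below is Nielsen's disclosed method: the demo cells, counts and viewers-per-device factors are all invented, purely to show how a panel-derived factor gets multiplied across millions of big-data devices -- and why errors in that factor scale up accordingly.

```python
# Hypothetical illustration (not Nielsen's disclosed method): panel homes,
# where viewers are identified, supply a viewers-per-tuning-device factor by
# demographic; that factor is applied to big-data device tuning counts, where
# only the device -- not the person -- is observed. All figures are invented.

# Panel observations for one programme: devices tuned and identified viewers.
panel = {
    "P18-49": {"devices_tuned": 1_200, "viewers": 900},
    "P25-54": {"devices_tuned": 1_500, "viewers": 1_300},
}

# Big-data tuning counts for the same programme (set-top-box + ACR devices),
# already projected to the measured universe of homes.
big_data_devices_tuned = {
    "P18-49": 4_800_000,
    "P25-54": 6_100_000,
}

def project_viewers(panel_cell: dict, devices_tuned: int) -> float:
    """Scale device tuning counts by the panel's viewers-per-device factor."""
    factor = panel_cell["viewers"] / panel_cell["devices_tuned"]
    return factor * devices_tuned

for demo, devices in big_data_devices_tuned.items():
    print(f"{demo}: {project_viewers(panel[demo], devices):,.0f} projected viewers")
```

In a sketch like this, the entire demographic split rides on the panel-derived factor, which is exactly where Ed's caveat (devices are not viewers) and the VAB's call for methodology disclosure both land.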

  7. John Grono from GAP Research, March 10, 2022 at 7:15 p.m.

    Thanks Sergio & Ed.

    Panel composition could be an issue. Traditionally, recruiting under-14s has been difficult, and in fact in many countries it is verboten. In AU we recruit the household and then request parental permission to include the children. It's not just 'any old household': there is a matrix based on census data as to which homes can come onto the panel - things like the number of people in the home, household composition, age of the household members, number of working TVs, TV subscriptions etc. - and the tolerance levels are very small.

    The one thing you can't recruit upon is actual TV usage. Once part of the panel, the household's TV usage can be analysed. What could be happening is that some cohorts may not be fully compliant, or it may simply be that the more TV you watch, the more often the home's metering faults, so those homes are removed and replaced, and this may generate a 'low viewing' bias. I have no data to confirm whether that is true or not.

    So, maybe "Big Data" (i.e. scraped from other sources) is more representative and more accurate, and the panel data is being calibrated to the "Big Data" set. That puts a lot of trust in the actual "Big Data" sets. I've not seen any data on whether they are consistent between sources. It is also possible that "Big Data" is biased to the "haves" and under-representing the "have-nots". Who knows? Apparently the VAB does.
