Nielsen Clarifies Expansion Plan, Says Modeling Won't Impact Ratings Until Clients Approve It

Two days after announcing a startling plan to use mathematical modeling to estimate nearly half the viewing in its national TV audience sample, a top Nielsen executive overseeing the initiative spoke with MediaDailyNews to set the record straight, clarifying that the method will not be incorporated into its official ratings until clients have had time to review its impact and approve it.

“We need client acceptance for us to roll out this methodology,” Farshad Family, senior vice president-local product leadership, said in response to an MDN report that Nielsen might include the new method effective with the new TV season. Family said the method was one part of a broader plan to expand Nielsen’s national TV audience sample; another part -- the rollout of new people meter households -- will begin to impact official TV ratings on Sept. 29.

Family said about 400 new people meter homes would be added to the sample effective with that date, and another 1,800 would be added to the national sample over the next two quarters. He said that plan, as well as the new modeling method -- which Nielsen calls a “viewer assignment methodology” -- has been communicated to some clients, and to industry watchdog the Media Rating Council, since 2013. But some Nielsen clients said they were surprised by the mathematical modeling component of the plan, and Family acknowledged that Nielsen may not have communicated it to all of its clients until it sent an official communication to clients on Wednesday.

Family said the clients that had previously been informed were mainly the ones who “actively participate” in MRC meetings. He also said Nielsen still has work to do with the MRC to convince the self-regulatory industry body that the new method passes muster and merits accreditation, but he said Nielsen executives are confident about the plan.

“We’re fairly confident in the approach and the methodology that we developed, however, we still need to demonstrate that with our clients,” he said.

Despite that confidence, Family acknowledged that the plan is complicated, and said he needed to check with Nielsen’s “scientists” on several questions posed by MDN, including exactly how Nielsen can “double” the “effective size” of its national TV ratings sample by modeling demographic data from its national people meters onto thousands of local TV set meters dispersed around the country.

Nielsen’s national people meter sample currently numbers nearly 25,000 households. Some of the increase will come from the 2,200 new people meters it will add to the national sample. The balance will come from 13,000 local TV set meters, onto which viewing behavior from the national people meters will be mathematically modeled.

Asked how 13,000 set meters could contribute to doubling the size of a panel that is currently nearly 25,000, Family explained it was due to “weighting” -- the fact that some homes carry more weight than others in the sample -- and said that it works out mathematically.
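For readers wondering what “weighting” can and cannot do to a sample’s precision, survey statisticians usually quantify it with Kish’s “effective sample size” -- a standard textbook formula, not one Nielsen has confirmed using for its own “effective size” claim:

```python
# Kish's effective sample size: how many simple-random-sample
# respondents a weighted sample is "worth" in precision terms.
# This is a standard survey-statistics approximation; Nielsen has
# not published the formula behind its own "effective size" figure.
def effective_sample_size(weights):
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

# With equal weights, effective size equals the head count.
print(effective_sample_size([1.0] * 100))              # 100.0

# Unequal weights *reduce* the effective size below the head count.
print(effective_sample_size([1.0] * 50 + [3.0] * 50))  # 80.0
```

Notably, under this formula unequal weighting alone lowers precision; any gain in “effective size” would have to come from the information the modeled set-meter homes themselves add, which is presumably part of what clients will want demonstrated.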

While the method may be mathematically scientific, it represents a huge shift for some Nielsen clients, because national TV ratings historically have been based on a sample of actual viewers, not modeled viewing behavior.

In addition to the national TV sample expansion plan, Family said Nielsen also announced an expansion of its local TV ratings sample to local clients this week. That plan does not involve any mathematical modeling; it strictly involves the rollout of more metered households.

That plan, he said, will add a total of 3,600 more local people meter households over the next two years.

“This is a big investment Nielsen is making,” he said, adding, “we’re committed to this.”

29 comments about "Nielsen Clarifies Expansion Plan, Says Modeling Won't Impact Ratings Until Clients Approve It".
  1. Ed Papazian from Media Dynamics Inc, September 5, 2014 at 10:56 a.m.

    I hope that someone at Nielsen can give a better explanation, especially about the impact of "weighting" the homes so they contribute much more to the size of the "effective sample". I also hope that Nielsen's "scientists" can provide some credible evidence that validates the proposed new approach.

  2. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 5, 2014 at 3:55 p.m.

    Sadly, with every new Nielsen explanation I become less clear about what was originally a clear, albeit risky, research proposition. Once again, I am in unrehearsed sympathy with Ed Papazian. I can only hope he has the tenacity to see us all through the interminable explanations and the required experimentation. So to end this long week, I have but two questions: Who is Mr. Family? And what problem is Nielsen really solving for its clients? Mr. Family has not been present at a single panel expansion meeting that I have been privileged to attend over the past year. Though I believe in the Scientific Method, I want to hear from Nielsen's Media Researchers that I have known for years - not some mysterious group of anonymous "scientists" who know what about media planning, buying and selling? Secondly, just what is the real problem for which Nielsen proposes this unreal solution? It is more than a little disturbing to be told that "weighting" fabricated viewing estimates improves TV audience measurement one iota. The MRC had better get ahead of this matter, which more and more sounds like an unmitigated disaster for the media, marketing and advertising trade.

  3. John Grono from GAP Research, September 5, 2014 at 5:32 p.m.

    Well that is the WORST explanation of hybrid modelling I have come across. It is a perfectly valid advanced research technique (not perfect but no research ever is) when done correctly. We used such techniques around a decade ago when we built MOVE - our OOH audience metric system. It was impossible to build a sample to measure people who drove, walked, bussed, trained, shopped, flew etc. and get accurate results. But we already had a lot of data on each of these outdoor activities (e.g. tickets, traffic counts, footfall etc.) We used hybrid modelling techniques to bring these disparate data sources together along with government and Census data. While TV is getting harder and harder to measure, ironically OOH is actually harder! I think their time-line is optimistic but this is heading in the right direction despite the awful attempt to explain it. And no, I don't work for Nielsen here in Australia, but I did until the mid '90s when I left for other pastures and to work on projects like the above.

  4. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 5, 2014 at 8:57 p.m.

    My first "Dear John" Letter:
    Dear John, I agree with your assessment of the reported Nielsen explanation. However, your defense of this modelling scheme in the name of "advanced research technique(s)" and natural "imperfections" is hard to abide. Not everything new is better. And none of my US research colleagues are pursuing "perfection." There are 116 million TV Households in the US with 294 million persons 2+. The population of Australia, to the best of my knowledge, is 22.7 million persons. Nielsen is not being asked principally to measure OOH viewing, which could account for 10% of viewing depending on program type and demographic. Rather, Nielsen is simply being asked to get person and household estimates right from the start in the context of US TV Households. The US neither wants nor needs a "hybrid" measurement. Basically, it needs a first-rate measurement of set tuning and persons viewing. Then, we need a rational method of developing cross-platform measurement. (The work of ESPN in this domain is world-class, but it has needed over half a dozen research companies to meet its business knowledge objectives.) If Nielsen is using this proposed modeling technique as a Trojan Horse to introduce expanded cross-platform measurement, then shame on all of us for not being forthright about our purposes and methods. Moreover, Nielsen already has respected and MRC-accredited techniques in place to address cross-platform measurement.
    Let's not confuse matters any more than they are confused already. This is not a question of how to design a Prius for US TV measurement that makes use of gasoline and electric power sources. The real question is how do we take the "gas" out of Nielsen's convoluted client communications and discover Nielsen's real purposes and methods. Congratulations on building MOVE. I hope it's working for you. In the US, we need to MOVE ON and get back to the best TV measurement in the world. Nielsen has done it before and can do it again, if its customers only have the will "to do the right things" and "to do things right." (See Peter Drucker, "The Effective Executive.") G'day mate!

  5. John Grono from GAP Research, September 5, 2014 at 9:43 p.m.

    Well this is my first "Dear Nicholas" letter. I am flabbergasted that you say "The US neither wants nor needs a 'hybrid' measurement", and then go on to describe ... a hybrid measurement system. Let me explain what a hybrid TV measurement system would look like.
    1. You get as much RPD as you can from as many sources as possible. You must note that this is merely a form of household tuning data and not viewing data.
    2. You validate that RPD to remove things like persistent uncovered tuning.
    3. You then edit that RPD to adjust for latency issues between the various RPD sources.
    4. You now have (for a large chunk of the US) some really accurate tuning data.
    5. You then sagely note that the size of the US vs. Australia is completely irrelevant - the principle is exactly the same, but the RPD coverage can/will be different.
    6. You then attempt to de-duplicate the RPD - this is the extremely hard part as each RPD supplier has different subscriber data.
    7. You then try to align the STBs to a Household structure (multi-box houses are a difficult proposition, as would be multi-provider households).
    8. THEN you start to look at your panel HH ratings data and use RPD-weighting to adjust the panel's projected HH rating to the 'known' RPD HH rating. This is very tricky and will (probably) rarely match all RPD services exactly but will be pretty close ... and closer than the panel.
    9. You then use the demographic profiling, duplication, co-viewing and longitudinal benefits that a panel brings to the equation.
    10. This can be done in one of two ways (or indeed both) - either by extrapolating the panel to the aggregated RPD HH ratings, or by 'donating' the panel characteristics to each individual RPD service.
    And yes, thank you, MOVE is doing extremely well, and being five successful years old will be expanded upon soon.
    I'm sorry you think that Hybrid should be dismissed as "some advanced research technique", because I tell you it works and it works well - but only when designed well. I for one refuse to be anchored in the world of old singular research techniques and am embracing the new methods we will need (and come to think of as normal one day) in this ever-fragmenting media landscape. OK, I've explained my rationale and approach - what do you propose?

  6. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 5, 2014 at 10:43 p.m.

    Dear John, Given the day and the hour, I propose I get a good night's rest and reflect carefully on your generous missive. Hopefully, my response can be as constructive and forward-thinking as yours. Thank you very much. Peace be with you - and me too. G'day mate!

  7. John Grono from GAP Research, September 5, 2014 at 10:54 p.m.

    Hi Nicholas. Bewdy mate ... she'll be right. If you crack it after a good night's sleep I'll be annoyed as I've been working on these things for almost 10 years and am continually trying to think of better, more effective, more efficient and cheaper ways! When I first started working on them I kept saying to myself ... gees I hope all this works ... lo and behold, when designed properly, they do! Sleep tight!

  8. Ed Papazian from Media Dynamics Inc, September 6, 2014 at 6:28 a.m.

    Interesting discussion, guys. I agree with Nick's original question about the true purpose of this attempt to expand the "effective" size of Nielsen's panel. Since most TV buys involve multi-show and telecast audience guarantees, the current sample size -- with reasonable, periodic increments -- seems perfectly adequate. When a seller guarantees the delivery for a total schedule involving hundreds or thousands of GRPs, the fact that some of these GRPs are from extremely low-rated shows is not a critical issue, reliability-wise. If, on the other hand, the Nielsen initiative is actually intended to be the basis for expanding cross-platform and OOH measurements, that's another kettle of fish, requiring many questions about sample composition and the compatibility of measurements to be dealt with. For example, I have seen research from Arbitron, years ago, suggesting that its PPMs generate much higher "viewing" levels -- especially for cable -- than Nielsen's meters for in-home TV. Is this still true? And, if so, how can we accept the PPM findings for OOH coupled with the people meter data for in-home viewing?

  9. John Grono from GAP Research, September 6, 2014 at 6:58 a.m.

    I think I just need to clarify one thing. In Australia OOH means the "Out-of-Home" medium - that is, billboards, buses, trains, airports, shopping malls - and not viewing of television that is "away-from-home". The hybrid system I referred to earlier was billboards etc. and not TV. And yes, the PPM tends to report higher reach as its thresholds are lower than the 8 minutes of the diary. PPMs have their place in hybrid radio measurement of course. In fact, the holy grail of cross-media measurement will probably end up as some form of fusion and hybrid of each medium that sits on top of a large consumer usage and attitudes survey.

  10. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 8, 2014 at 1:51 p.m.

    Dear John, While I do not believe that you intended to commit a logical fallacy, I believe you have. The discussion of Australia's OOH (i.e., Out-Of-Home Media Other Than TV, Radio & Digital) measurement is a Red Herring, pure and simple. A review of Plato, Socrates and Aristotle reminds me that you have presented a fallacy in which an irrelevant topic is put forward in order to divert attention from the original issue. I fear "that your basic idea was to 'win' an argument by leading attention away from the argument and to another topic. [This sort of 'reasoning' has the following form:
    Topic A is under discussion.
    Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
    Topic A is abandoned.]
    This sort of 'reasoning' is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim." John, you are a skillful debater and a clever marketer, but your argument is a distraction from the core issue, which is Nielsen's inadvertent plan to degrade its first-rate, time-tested national TV audience measurement for the sake of APPEARING to improve statistical reliability BUT at the expense of validity, accuracy and utility. (Further, assuming Nielsen's techniques were correct - which they are not - the enhancement in SE would be distorted and de minimis, based on reports to date.) In sum, the methodology for Australia's MOVE has nothing to do with the US' Methodological Research Question At Hand (MRQAH). Sincerely, Nick

  11. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 8, 2014 at 2 p.m.


  12. John Grono from GAP Research, September 8, 2014 at 6:10 p.m.

    And respectfully (and ironically) Nicholas, your invocation of Plato, Socrates and Aristotle is in itself a genuine red herring also. Sadly you didn't seem to understand that using Australia's use of hybrid research techniques with MOVE to measure OOH was not a "change of topic" but an example of how the adoption of new research techniques (which are being clamoured for by marketers, research companies et al.) can be done. And successfully. What you see as 'degradation' I see as 'enhancement'.
    There is a long-validated principle in statistics called "effective sample size". Going out on a limb and using another example, you can construct a tightly managed stratified sample which produces a lower Std. Error than a pure random sample with exactly the same sample size. In essence the 'effective' sample size of the stratified sample is higher than that of the purely random sample. I apologise if you also see that as a red herring - I see it as a basic fundamental of research statistics. You can make similar gains by using disparate (validated) data sources in combination - that is called hybridisation. Another example (wow, I am a risk taker) is using tag-based data that measures traffic to a website (but knows nothing of the profile or behavioural patterns of the audience to that website), which can then be used in conjunction with panel data to produce audience data alongside traffic data. And yes, we've been doing that here in Australia for around 4-5 years. We're even extending it to using 3rd-party data like Facebook. These are all hybridisation techniques, all in market, and all producing commercially accepted trading data in cost-effective ways.
    Unlike the drunk under the lamp post searching for his keys because that is the only source of light, we have widened our thinking and are getting on with delivering research techniques that have been developed tripartite - media owner, media buyer and advertiser. If hybridisation techniques have nothing to do with the US' MRQAH then just maybe you are asking the wrong question, or maybe you need to shift your lamp post. I suspect we will just have to agree to disagree. Cheers.
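Grono's stratified-sample point can be checked with a small simulation. The numbers below are invented purely for illustration (two viewer strata with different mean viewing hours, nothing to do with any real panel); the point is only that, for the same total sample size, proportional stratification yields a smaller standard error than pure random sampling:

```python
import random

random.seed(7)

# Hypothetical population: two strata of TV viewers whose within-stratum
# spread is small but whose between-stratum gap is large.
light = [random.gauss(2.0, 0.5) for _ in range(5000)]  # light viewers
heavy = [random.gauss(6.0, 0.5) for _ in range(5000)]  # heavy viewers
population = light + heavy

def srs_mean(pop, n):
    """Mean of a simple random sample of size n."""
    return sum(random.sample(pop, n)) / n

def stratified_mean(strata, n):
    """Mean under proportional allocation across strata."""
    total = sum(len(s) for s in strata)
    est = 0.0
    for s in strata:
        k = round(n * len(s) / total)
        est += (len(s) / total) * (sum(random.sample(s, k)) / k)
    return est

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Repeat each design many times and compare the spread of the estimates.
srs = [srs_mean(population, 100) for _ in range(2000)]
strat = [stratified_mean([light, heavy], 100) for _ in range(2000)]
print(sd(srs) > sd(strat))  # True: stratification cuts the standard error
```

The stratified design's "effective sample size" exceeds its head count precisely because the stratification removes the between-stratum component of the sampling variance.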

  13. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 8, 2014 at 7:52 p.m.

    The Scottish writer Andrew Lang wrote of politicians that they "use statistics in the same way that a drunk uses lamp-posts—for support rather than illumination." I fear, John, that you have demonstrated that researchers can be no more enlightened than politicians at times. Politicians and researchers are both important to society, but they ought not to become intoxicated with the new brew of "new numbers" or "new calculus." Your paean to hybridisation (sic) is not only hyperbolic, but also statistical hysteria verging on heresy. Agree to disagree? Fine. But be careful not to become distracted any further by bright lights and shiny things. G'day mate!

  14. Patty Ardis from Ardis Media, LLC, September 8, 2014 at 8:11 p.m.

    Until clients approve it or prove it?

  15. John Grono from GAP Research, September 8, 2014 at 8:12 p.m.

    Nicholas, you are yet to provide even a glimmer of how you would approach the problem of measuring fragmented audiences. I would love to hear your thoughts and suggestions rather than disparagement. I also have to tell you that despite the popular misconception, there are no bright lights and shiny things in the world of hybrid research - just plain hard work. Like the apocryphal Canute, this tide will not be turned back, and while Ned Ludd may have had a point, history proved him wrong.

  16. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 8, 2014 at 8:47 p.m.

    What an unexpected surprise.
    The fragmentation of US TV audiences is not my problem, but the right of a free people. The maintenance of a stable currency is, however, a professional priority. Fracking, once thought to be an energy panacea, is now turning out to be an ecological nightmare. I fear that hybridization is no answer to this challenge of US TV fragmentation. I accept hybridization for my transportation (i.e., I own & enjoy driving a Prius) but not for my television measurement & TV currency. Like fracking, the methods you propose will pollute the pure aquifers of viewing data that are ever more in short supply. I, for one, shall not engage in the destruction of the natural media environment. Peace

  17. John Grono from GAP Research, September 9, 2014 at 1:11 a.m.

    1. Not only is the maintenance of a stable currency a priority, but of equal if not greater priority is the responsibility of keeping that currency representative of current usage behaviour. A "Do Nothing" or "Wait and See" policy is simply not enough.
    2. I was unaware that fracking was a threat to TV ratings. Sure you have a fear of hybrid measures, but I can assure you that your drinking water will be safe, that there will be no flames coming out of your taps etc. Many have a fear of the unknown. I remember when Everest was unclimbable and a man on the moon was an impossibility.
    3. And introducing fracking at all ... that is the biggest red herring seen outside of the Baltic Sea!
    4. I for one will not engage in the degradation of a stable currency by becoming a paid-up member of the Do Nothing Party.
    5. I promise that at the first sign of the OOH measurement system polluting our aquifers, flames in the sink and the bath then I will recant and alert the whole world.
    6. I'm off to take off my sabots and throw them into the cogs of the loom due to my elevated fear levels.

  18. Ed Papazian from Media Dynamics, September 9, 2014 at 8 a.m.

    While everyone talks about wanting "granular" data and more precise measurements of cross-platform behavior, the fact is that few are willing to obtain such information in the best way possible -- by paying for a much larger sample. Hence Nielsen's attempt to go the hybrid way to increase the "effective size" of its sample. I agree with Nick about the desirability of maintaining consistency in the established "currency" of TV time selling and buying. But I fear that the "we must have more and more data" mob is going to push Nielsen in the direction it seems headed, though, perhaps, wiser heads may take the time to carefully investigate the proposed methodologies and, most important, determine whether they really produce valid results. By "valid" I refer to "accurate" projections, not statistical theories about "reliability" as it applies to sample size. The fact is that you can't predict real-world accuracy by statistical machinations. For example, Arbitron's diary ratings for radio have consistently produced about 25-30% higher average quarter-hour audience levels compared to the electronic "findings" of the PPMs, yet given the same sample sizes, the statisticians would have concluded that such ongoing differences were impossible. My point is that a much deeper degree of probing is required to investigate the accuracy of the various sources that are to be "married" in a hybrid system, including possible sample and other biases, before we move on to the best way to merge the findings into a common "database".

  19. John Grono from GAP Research, September 9, 2014 at 8:49 a.m.

    Ed, I TOTALLY agree. You will note I consistently referred to VALIDATED other data sources. If those other data sources are of poor or lesser value then you devalue the currency. We are fortunate (in a research way) regarding TV in that the Pay TV environment is dominated by a single player - Foxtel. So getting RPD from all 2.6m homes (around 30% of our HH) would provide VERY accurate tuning data for that sub-sector of the TV universe. Even if we only got 100,000 or 500,000 then it would be an advance on the 1,500 in our panel of 5,000. The caveat is that if it is a sub-sample, the auditor (yes, our system is audited every four weeks) must approve its representativeness of the 2.6m universe. That current sample of 1,500 reports on 118 channels. Clearly that stresses our ability to report the smaller channels. With hundreds of thousands of homes we should be able to report on many more if not all. The starting point would then become the RPD HH rating (indeed as it is in most TV audience measurement systems around the world) from which we could generate more accurate audience-based estimates (again, as happens with most TV audience measurement systems around the world). This is achieved using the EXISTING panel. In essence, the RPD simply recalibrates the HH rating of the existing panel so that the starting point for those channels is more accurate.
    And one thing on the radio data. The PPM (Arbitron's, GfK's ... any and all of them) has under-reported cf. the radio diary for one primary reason - carry rates. Given that radio's peak is AM/breakfast listening, people do not get out of bed and think ... gees, I'd better grab the PPM or put the WatchMeter on ... they start their diurnal round. Once you are wedded to electronic data capture then all such listening is lost. With a diary, the respondent can 'back-fill'. While memory or claimed usage is often frowned upon by research purists (clearly of which I am one), sometimes it still does a better job!
(But I am working on that as well.)
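The recalibration Grono describes -- RPD sets the household-tuning level, while the panel keeps supplying the demographic detail -- reduces to a simple ratio adjustment. All figures below are invented for illustration; this is not Nielsen's or any operator's actual method:

```python
# Hypothetical figures only: a sketch of the ratio adjustment Grono
# describes, where return-path data (RPD) from set-top boxes corrects
# the level of the household rating and the panel supplies the mix.
panel_hh_rating = 1.2   # % of households tuned, projected from the panel
rpd_hh_rating = 1.5     # % of households tuned, measured via RPD

# RPD recalibrates the level: scale panel projections by this factor.
adjustment = rpd_hh_rating / panel_hh_rating

# The panel still supplies what set-top boxes cannot: who is watching.
panel_demo_rating = {"18-34": 0.24, "35-54": 0.54, "55+": 0.42}
adjusted = {demo: rating * adjustment
            for demo, rating in panel_demo_rating.items()}

print(round(adjustment, 4))  # 1.25
```

The design choice is that the more-complete data source anchors the totals while the richer-but-smaller source distributes them, which is the general shape of the hybrid systems discussed in this thread.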

  20. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 9, 2014 at 1:21 p.m.

    Dear John, You have pulled out a good number of the logical fallacy stops in your hybrid media research organ. Your instrument sounds like a cross between a whoopee cushion and the Grand (Pipe) Organ in the Sydney Town Hall's Centenary Hall. I applaud your enthusiasm and intense interest in methodologies. However, you are no "purist", my friend, as you claim. Remember first, that neither of us agreed to pursue perfection. Having reviewed my prior comments, however, I see no rejection of hybrid research per se, nor do I see words that encourage a "'Do Nothing' or 'Wait and See' policy." I am glad, John, that you agree with Mr. Papazian. So do I. Now, we're making progress. So where is the problem? I fear we have a problem of instrumentation. Nielsen has simply chosen the wrong instrument to play the media symphony that has been written for the US. What the US needs is a piano, not an organ. In closing, allow me to observe (through listening carefully) that Dvorak's "New World Symphony" theme can be played with simply a piano. Peace. Enjoy:
    G'day mate!


  21. John Grono from GAP Research, September 9, 2014 at 3:45 p.m.

    Dear Nicholas P. Or can I just call you Nicholas? I'm glad you bring up the Grand Organ in the Sydney Town Hall's Centenary Hall. I was waiting for someone to see the link to TV research and you delivered in spades. As you undoubtedly know, it was (at the time of its construction) the largest pipe organ in the world. It also (controversially) included a 64-foot pedal stop, something that had never been attempted. It also aroused the fears of the cognoscenti of the time as it was an unknown, and people feared it wouldn't work, would cost too much, or (had they existed at the time) sound like a whoopee cushion. Pleasingly, this Australian landmark piece of engineering (both the organ and the building and its ceiling) proved the purists correct and the naysayers wrong. You could indeed say it symphonically heralded a new world (though of course it preceded From The New World by several years). Quod erat demonstrandum.

  22. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 9, 2014 at 6:05 p.m.

    Dear John, Another great Australian instrument: !
    Now, this makes me happy -- and ready for a visit & a purchase.
    Love that Australian Maple.

  23. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 9, 2014 at 6:05 p.m.


  24. John Grono from GAP Research, September 9, 2014 at 6:38 p.m.

  25. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 9, 2014 at 9:41 p.m.

    Dear John, Your red herring "comment" was graceless. We're not amused. Your inability to distinguish genuine from inauthentic communication is profoundly disappointing and points to a broad lack of judgement in research and other human matters. Hence, this methodological dialogue is done unless you apologize. No joke. Peace, Nick

  26. John Grono from GAP Research, September 9, 2014 at 10:09 p.m.

    As was your groundless comment that I am not a research purist, a comment that preceded mine. The ball is in your court.

  27. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 10, 2014 at 7:11 p.m.


  28. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 12, 2014 at 7:42 p.m.

    I believe the following to be true of MediaPost and its Editor in Chief, Joe Mandese: “TIME (Magazine) approaches hard questions with a conviction that smart people of good will can disagree fiercely – but that discourse can be reasoned, enlightening, even entertaining. I don’t believe debate divides us; it draws us together, because the premise is that we are looking for the best answer…” - Nancy Gibbs, first female Managing Editor of TIME magazine

  29. Nicholas Schiavone from Nicholas P. Schiavone, LLC, September 27, 2014 at 7:46 p.m.

