Looking to “stabilize” its local TV measurement, Nielsen has announced a number of new initiatives.
Nielsen says it will start up “ratings stabilization” on July 30 in local people meter (LPM) markets. The statistical technique will be used “to mitigate ratings fluctuations that are caused by panel variability, as opposed to true differences in viewer behavior.”
Nielsen says this will “increase quality to the same degree that doubling panel size would.”
For some time now, local TV station executives, especially in small to mid-size markets, have been concerned about volatile viewership measurement caused by small or unrepresentative TV household samples in their markets.
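To see why small samples produce volatile ratings, and what "the same degree that doubling panel size would" means statistically, consider the standard error of a simple random-sample rating estimate. This is an illustrative back-of-envelope calculation, not Nielsen's actual methodology; the market sample sizes are invented.

```python
import math

def rating_se(p, n):
    """Standard error of a rating estimated from a simple random sample
    of n homes, where p is the true tuning proportion."""
    return math.sqrt(p * (1 - p) / n)

# A 5.0 household rating (p = 0.05) in a hypothetical small metered market:
p = 0.05
for n in (400, 800):
    print(f"n={n}: SE = {rating_se(p, n) * 100:.2f} rating points")

# Doubling the sample shrinks the SE by a factor of 1/sqrt(2), about a
# 29% reduction -- the gain Nielsen attributes to stabilization.
ratio = rating_se(p, 800) / rating_se(p, 400)
```

Note that "equivalent to doubling panel size" therefore means roughly a 29% cut in standard error, not a halving; halving the SE would require quadrupling the sample.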
Nielsen also says it will address the "unidentified audience" problem, which occurs when the TV is on but no one has checked in as a viewer. Nielsen says "viewer assignment" modeling will help here; currently, about 3% of TV homes are excluded because of "incomplete persons" data.
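As a rough illustration of what "viewer assignment" could look like: tuning with no logged-in viewer is ascribed to household members based on their historical likelihood of viewing in that daypart. Nielsen's actual model is proprietary; the household composition, probabilities, and threshold below are all invented.

```python
# Toy "viewer assignment": when a set is tuned but no one is logged in,
# ascribe the tuning to members whose historical probability of viewing
# in that daypart clears a threshold. All figures are invented.

def assign_viewers(members, daypart, threshold=0.5):
    """Return the members whose historical viewing probability for this
    daypart meets or exceeds the threshold."""
    return [name for name, probs in members.items()
            if probs.get(daypart, 0.0) >= threshold]

household = {
    "adult_f_35_49": {"primetime": 0.8, "daytime": 0.3},
    "adult_m_35_49": {"primetime": 0.7, "daytime": 0.1},
    "child_6_11":    {"primetime": 0.2, "daytime": 0.4},
}

assign_viewers(household, "primetime")  # both adults, not the child
```

The commenters below question exactly this step: the assignment is a model output, not an observation, and its validity depends entirely on how well past behavior predicts the unlogged session.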
On October 1, Nielsen says it will start ratings stabilization in 31 set meter markets. At the same time, the diaries used in these markets to collect persons data will end. Nielsen says it will use "viewer assignment" modeling here to derive demographic viewing data.
Nielsen will also deploy "code reader" technology in another 14 smaller markets previously measured by diaries.
With regard to its National People Meter service, Nielsen says it will double its sample size to 25,000 homes from the current 12,500 level by the start of next fall’s TV season.
One must wonder when the cable media will decide to upend Nielsen's long-time stranglehold on program ratings. Almost every new TV sold these days is capable of sending information about which program is being viewed back to the cable TV provider. I would not be surprised if Comcast, Charter, TWC, and others collect that information now. To do it right, they'd have to ask owner permission, collect the data, then aggregate it across all cable TV companies. Clearly, this would provide more accurate data than Nielsen's 25,000 homes, which carry a built-in bias: these viewers know they are being monitored. So how long will the multi-billion dollar Nielsen empire last?
Lord help us. Now, Nielsen is going to take situations where a set is on but no one logged in as a "viewer" and "correct" this by deciding who in the household was actually "watching"? And its "stabilization" process is the equivalent of doubling the sample size? Are the agencies buying this?
Ed, are they trying to reduce the incidence of 'uncovered tuning'? That is, they have tuning records that show channel changes (i.e., channel switching) but no person-based record; they 'know' that someone is in the room because they can see the channel changing, but they don't know who. Either way, it doesn't make a lot of sense to 'ascribe' viewing to that tuning if the incidence is only 3%. I'd rather accept the lower in-tab rate and the minuscule sample-size effect on the SE. I am not au fait with the record-level data in the US, but they used to have a two-stage process (set edit, then people edit). Ed or Nick, do you know whether that is still the case? If so, it means allowing HH tuning ratings through, but no accompanying PPL viewing ratings. Wouldn't it be simpler to fault the HH and align them that way, with the commensurate diminution in sample size and in-tab rate? Or are these markets' sample sizes so small as to make that dangerous?
You are probably right, John, though if this takes place only 3% of the time, I, too, wonder whether it is worth the bother. In any event, I'd still be interested in seeing how they determine who was "watching." On a related point, one of the questions that remains unanswered, especially when commercials are aired, is how many of the people the system counts as "viewing" (based on logged-in claims made sometime earlier and referring, in reality, only to program content) are actually in the room and viewing. I realize that this is a tough nut to crack, but I grow tired of hearing people cite "commercial viewing" statistics, based on peoplemeter data, as if we knew with any degree of certainty that they are valid.
Nielsen has added sample to several markets as well. As I see it, this will be an attempt to "validate" sample so that as much of it as possible is counted. Still, there is going to be another series of glitches in the ratings data, just as there was with the inclusion of metered markets and then the inclusion of LPMs. Some of these shifts are so significant that not only do you have to question the old methodology, you have to question the new.
Right NOW it's only 3% of viewing, but isn't this the method they'll use to ascribe demos to nonlinear viewing as well, when the mobile measurement starts up?
And Ben, when the video providers discovered that their clickstream data had value, they all went to their separate corners to try to develop a system they could monetize. This resulted in competitive measurement services that are less than comprehensive (and all of which identify demos via modeling). The operators' lack of cooperation has created an advantage for Nielsen as far as national measurement is concerned.
Hi Suzanne. I strongly doubt it. The hardest part about mobile measurement is 'sandboxing' and device sharing. This all relates to de-duplicating viewing between and across devices, then again between and across apps within a device. There is also the issue of stream measurement and correct conversion to average minute audience. All of this is then predicated on the assumption that you can identify the content on the mobile device. For example, a lot of YouTube content is not readily identifiable. Non-linear measurement relies heavily on URLs, and many (most?) video URLs use some form of shortener, such as bit.ly, which pretty much makes them undecipherable. There is a distinct lack of metadata, and, most concerning, no video metadata standards (for coding and carriage) that I am aware of. Clearly these standards need to be global and not country specific. Once you get all that sorted out, you then have to work out how to fairly combine linear viewing of network content with non-linear streams/views. I think we will probably end up with a system of video/audio 'injection'. If you want to be measured ... inject the necessary metadata and codes, otherwise viewing of your content will end up in the "All Other Video Content" bucket. Cheers.
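The de-duplication problem described above can be sketched in a few lines: the same person's sessions across devices and apps must collapse to a single reach contribution. The session records and identifiers below are invented for illustration.

```python
# Toy de-duplication across devices and apps. The same person (p1)
# watches on both a phone and a tablet; device-level counting inflates
# the audience, while person-level reach does not. Data is invented.

sessions = [
    {"person": "p1", "device": "phone",  "app": "youtube"},
    {"person": "p1", "device": "tablet", "app": "youtube"},
    {"person": "p2", "device": "phone",  "app": "hulu"},
]

# Naive device/app-level counting:
session_count = len(sessions)                   # 3 sessions
# Person-level reach de-duplicates across devices and apps:
reach = len({s["person"] for s in sessions})    # 2 people
```

The hard part in practice is not the set arithmetic but establishing a reliable person identifier across sandboxed apps and shared devices in the first place, which is precisely the point being made above.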
John,
" If you want to be measured ... inject the necessary metadata and codes, otherwise viewing of your content will end up in the "All Other Video Content" bucket. "
This is the most reasonable, actionable and scalable solution, and it's just a matter of the operational and systemic tweaks needed to make sure it happens. Complex, but not impossible.
Less is more. And if not everything, frequency is the key to learning. Please pay attention to Mr. Papazian; in this matter, he knows more than most topic commentators combined. We are fortunate to have his wisdom in 2015. Because Ed may be too modest for our own good, allow me to reiterate his guidance, which wisely comes as questions. There is more wisdom in questions well framed than in any singular answers to them!
Ed Papazian from Media Dynamics Inc, Feb 20, 2015, 8:08 a.m.:

"Lord help us. Now, Nielsen is going to take situations where a set is on but no one logged in as a "viewer" and "correct" this by deciding who in the household was actually "watching"? And its "stabilization" process is the equivalent of doubling the sample size? Are the agencies buying this?"
If what we read is true (i.e., "Nielsen To Launch 'Ratings Stabilization' In 31 Markets"), then what we are also witnessing is a Nielsen National Sample Expansion Plan that is ill-conceived and certain to do more harm than good.
Making up viewing estimates (in this case, through mathematical modeling) is always and everywhere wrong. Nielsen estimates must be tabulated, not formulated, to be trusted. By calling fabricated Nielsen ratings INDUSTRY CURRENCY, we have collectively taken leave of OUR SENSES and achieved the ultimate in SELF-DECEPTION. To save the industry from its short-sighted shortcut to hell, rigorous MRC accreditation and transparent E&Y auditing must be a pre-condition for any implementation of any plan designed to find viewing data in households where they do not exist in any way, shape, or form (i.e., set-meter-only households).

Further, Nielsen's claim that it is substantially improving stability (at the expense of validity, no less) would seem something of a sham. To reduce the SE (standard error) by half, one would need to quadruple the sample size. Is that happening? I don't think so. Who will pay for that now? And what is the value of stable numbers that are actually systematic concoctions? Perhaps it's time for government intervention through the FTC, the FCC, or Congressional hearings, as were conducted in the early '60s. Why would Nielsen denigrate a system that has worked well since 1987?

Last Thursday, a senior Nielsen research officer effectively claimed that, through modeling, Nielsen could transform a set-meter home into a people-meter home for the purposes of sampling. If that is true (and it's not), then we can henceforth say that every dog has five legs, because Nielsen clearly has the power to turn every dog's tail (tale?) into a leg. Since when do media planners, buyers, and researchers accept research FICTION as research FACT? Perhaps David Hannum, in criticism of both P. T. Barnum and his customers, was right: "There's a sucker born every minute." I, for one, hope not. Onwards and upwards.
Dear Mr. Friedman: I believe you have provided either misleading data or a misleading impression. Please verify the following claim with Nielsen:

"With regard to its National People Meter service, Nielsen says it will double its sample size to 25,000 homes from the current 12,500 level by the start of next fall's TV season."

It is my understanding that Nielsen's national panel is already approximately at this level when you add LPM households to NPM households in calculating national sample size. Credit where credit is due. Sincerely, Mr. Schiavone
All the agencies buy into this because they make more money creating "creative" for TV and digital, not because it works better. And the clients blindly follow their agencies. When others try to educate them, the clients claim they do not have time to bother with the media and with getting a realistic media mix that works; that this would be below their pay grade. All this back and forth doesn't change the fact that TV and now digital are way, way overdone. Print needs to be put back in the mix, specifically Hispanic newspapers, and especially print in Spanish and in culture serving Hispanics and blacks, really leading with multicultural content and media. Rather than wasting all this money on TV and digital to do something other than drive sales (please a shareholder or a boss, follow whatever everyone else says is OK), another way to do better media would be to invest in black and African American creative.
Cara Marcano, CEO
Reporte Hispano
Hispanic media planning and buying for real sales growth
director, National Association of Hispanic Publications ~ NAHP
caramarcano@reportehispano.com
Martin, I must point out that metadata tagging may still be only a partial solution. With online video we rely heavily on the URL. This clearly tells us about the player but not the content, and a content owner's content may be spread across many players. In essence we would need (1) all content tagged with a globally accepted form of metadata identifying the owner, the name, the series, the episode, maybe even censorship classification (though that is far from globally harmonised); (2) all browsers capable of accepting and rendering that metadata; (3) decisions made as to whether the coding is video or audio based (or both); (4) decisions as to minima (i.e., does the video have to be 100% viewable and with audible sound on, the same as TV's criteria in many markets); and finally (5) some way of aggregating all those bits of content, which may have been edited, on an even footing, such as duration-weighted average minute audience. It ain't going to be easy.
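Requirement (5), duration-weighted average minute audience, comes down to a simple calculation: total viewer-seconds across all plays of a piece of content, divided by the content's duration. A minimal sketch, with invented play figures:

```python
# Combining several plays/edits of one piece of content into a
# duration-weighted average minute audience (AMA). Figures are invented.

def duration_weighted_ama(plays, content_seconds):
    """plays: (viewers, seconds_viewed) tuples for one piece of content.
    AMA = total viewer-seconds divided by the content duration."""
    viewer_seconds = sum(v * s for v, s in plays)
    return viewer_seconds / content_seconds

# A 30-minute (1800 s) episode: one full linear airing plus two clips.
plays = [(1000, 1800), (400, 600), (250, 300)]
ama = duration_weighted_ama(plays, 1800)  # 2,115,000 viewer-seconds / 1800
```

The arithmetic is trivial; the hard part, as the list above makes clear, is knowing that all three plays are in fact the same content, which is exactly what the metadata standards in points (1) through (3) would have to guarantee.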
While I am sensitive to your core concerns, Cara, you have stumbled into a serious philosophical error that changes the subject: the fallacy of defective induction, a conclusion drawn from weak premises. (Simply stated: all generalizations are dangerous, even this one.) The media research directors I know at respected agencies (e.g., Brad Adgate at Horizon) see Nielsen's sampling fallacies for what they are. Moreover, there are sophisticated clients (e.g., Jim Speros at Fidelity) who do not "blindly follow" agency recommendations and who collaborate in the final analysis to meet their fiduciary responsibilities to their stakeholders. Would that Nielsen shareholders understood the inadequacies of the Nielsen management team and the core business of panel research well enough to see that Nielsen N.V. (NYSE: NLSN) fulfills its fiduciary responsibilities as A.C. Nielsen did in the days of Arthur Nielsen, Sr. and Jr., for it was Arthur C. Nielsen who said, "Employ every economy consistent with thoroughness, accuracy & reliability." Today, Nielsen clients seem only to receive the spoiled fruit of "economy" run amok.