Commentary

Pay No Attention To That Man Behind The Curtain: Questions Nielsen Would Prefer Went Unanswered

From the moment I stumbled onto the television audience measurement industry in 1999, what I have found utterly inexplicable is not how Nielsen maintains its stranglehold on the industry -- with that I am well acquainted. No, what has perplexed me is how Nielsen executives manage never to talk about how they do what they do. More than a billion dollars a year in revenue (not to mention the $250 billion or so spent in Los Angeles and New York chasing Nielsen's approval), and virtually no published analysis of changes in data collection technologies, ascription policies, or the weighting and stratification of the sample? Zip. Zero. Nothin'. Nada. It would be comical were it not so fiendishly frightening.

Anticipating Nielsen's cry of "The MRC knows everything," I refuse to concede that point. As someone who is forbidden from joining, participating in, or observing the MRC, I have no idea whether the MRC knows how Nielsen behaves. I have recently spoken with members of the audit committee who say they have been waiting years for answers to basic questions.

Having only one entity aware of Nielsen's "secrets" does not sit well with me. It is my opinion that Nielsen's secrets are kept secret because, if the information became public, the industry would demand change. I can see no scenario in which secrecy can be defended as an integral part of television ratings, especially when a single entity garners more than 99% of the revenue associated with the industry. I would suggest that the only organization that benefits from this arrangement is... drum roll please... the company that raises prices every year despite record profits... the company that routinely acquires potential competitors... the company that signs its broadcast clients to long-term staggered contracts... the company that ignores the MRC when it suits them... wait for it... Nielsen.

So what to do? Well, here's a start. While sitting at lunch the other day with several people who work for an agency -- I will not name names lest Nielsen use their relationship with me against them in some way -- I was asked: "If you were a Nielsen client, what 10 questions would you ask Nielsen concerning ratings?"

Here's my list of eleven questions (when possible, I try to give my clients more than they pay for), aimed at uncovering some of what is assumed, ignored, or swept under the rug in local television ratings. Any Nielsen customer should feel free to forward this list to their representative. As a bonus, I will pay $100 to the first person -- Nielsen employees not excluded -- who sends me the answers in writing. Just tell me who gave you the answers. I will either cite the source or withhold the identity (responder's choice) and post the reply on this blog. As always, please feel free to contact me with better questions, comments or criticisms.

1. During the 2006-2007 television season, what percentage of the households contacted in Nielsen's "best" market was placed in the panel and remained there for at least six months? How about in the "worst" market?

2. How is the method of television distribution [cable (analog/digital/advanced services), satellite (DirecTV/EchoStar), over-the-air, telco] reflected in the panel?

3. How many variables are involved in local sample stratification? How is stratification validated?

4. Why is it necessary to weight a local sample, and how is the weighting accomplished?

5. Do local television ratings for a given period ever indicate that more than 100% of television households are watching? If so, how does that happen -- and does it occur with all three local data collection methods?

6. Now that Nielsen has transitioned a large number of markets from tuner meters and diaries to local people meter data collection, will Nielsen publish direct comparisons for the time periods when both methods were deployed concurrently? If not, why not?

7. For local ratings in a tuner meter and diary market, what is the error associated with a program that earns a 20.0 household rating? A 2.0 rating? A 0.2 rating? How about among Males 18-34? How are the errors calculated -- and what assumptions must be made for the calculation to be valid? (A back-of-the-envelope version of the textbook calculation is sketched after this list.)

8. What is Nielsen's rough estimate of the error attributable to bias from panel recruitment, weighting, and local data collection technologies? Ask Nielsen to break the estimate down by individual source of bias.

9. What television technologies disqualify a household from being in the panel today? Are there any technologies that do not disqualify a household but are not measurable?

10. On a typical day, what percentage of deployed active/passive meters identify or misidentify content -- be it network, program or commercial -- or fail to recognize a content signal at all? (In other words, what percentage of the meters deliver completely clean data with no need for ascription or error correction?)

11. On what dates were the following television technologies fully supported, measured and reported in local television ratings reports such as the VIP?

a. High Definition broadcast networks

b. High Definition cable and satellite networks

c. Video on Demand

d. Consumer electronics based household DVRs

e. Television service provider based household DVRs

f. Slingbox-style services
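
To make question 7 concrete: below is a minimal back-of-the-envelope sketch, in Python, of the textbook sampling-error calculation for a household rating. It assumes simple random sampling and an invented in-tab sample size of 400 households -- both are my assumptions for illustration, not Nielsen's published methodology, which is precisely what the question asks Nielsen to disclose.

```python
import math

def rating_standard_error(rating_points: float, in_tab_households: int) -> float:
    """Approximate standard error (in rating points) for a household rating,
    under a simple-random-sampling assumption: SE = sqrt(p * (1 - p) / n).
    Real panel designs (stratification, weighting, clustering, non-response)
    change this -- which is exactly what question 7 asks Nielsen to spell out."""
    p = rating_points / 100.0  # convert rating points to a proportion
    se_proportion = math.sqrt(p * (1.0 - p) / in_tab_households)
    return se_proportion * 100.0  # convert back to rating points

if __name__ == "__main__":
    n = 400  # hypothetical in-tab household count for a diary market
    for r in (20.0, 2.0, 0.2):
        se = rating_standard_error(r, n)
        print(f"rating {r:>4} -> +/- {se:.2f} points (one standard error, n={n})")
```

Under those naive assumptions, a 0.2 rating in a 400-household sample carries a standard error of roughly 0.22 rating points -- larger than the estimate itself -- which is why the question also asks what assumptions have to hold for any published error figure to be valid.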
