Commentary

Research Differentials Frightening For Stations

Gary O'Halloran shook his head with disbelief when last May's Nielsen ratings came in for the Bismarck, North Dakota, market, where he runs the Fox station. For the month, Fox hit "24" averaged a microscopic .2 in the key 18-to-49 demo.

It wasn't the first time "24" had posted such scant ratings. "Every time that would come out, I would just cringe," O'Halloran says. Bismarck is a "diary" market, where Nielsen relies on viewer cooperation to generate ratings. O'Halloran believes the tenuous method leads to under-counting, which is hurting his business.

Also, he's interested in exploring set-top-box data as an alternative or a supplement. As president of the North Dakota Broadcasters Association, he plans to discuss the possibility with his fellow members next year.

O'Halloran is hardly the only station head eager to evaluate differences between Nielsen and STB data. A group of industry researchers may give him some guidance.

New research from the Collaborative Alliance Set Top Box Think Tank reveals wide swings between Nielsen and STB ratings from Rentrak. The Think Tank -- with members such as NBC Universal, Cablevision and the TVB -- evaluated three markets: Houston, Birmingham and Grand Rapids, Mich.

They were chosen, in part, because each uses a different Nielsen measurement method: a local people meter in Houston; a set meter in Birmingham; and diaries in Grand Rapids.

In Grand Rapids, the researchers evaluated a campaign that ran there last spring for a retail client of agency MPG. They compared results from Nielsen with Rentrak's. The differences were dramatic.

The MPG client purchased 103 15-second spots across all dayparts. Nielsen found the campaign generated 408 household gross rating points, while Rentrak reported 609.

Looking at the buy, the average household rating for newscasts -- national and local -- was a 3.9 according to Nielsen -- but a 7.0 on the Rentrak scale. That's a 79% differential.
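Those differentials are simple arithmetic on the figures quoted above. A quick sketch of the math (the per-spot average is our own back-of-envelope figure, not something the study reported):

```python
# Recomputing the Grand Rapids differentials from the figures quoted above.
nielsen_grps, rentrak_grps = 408, 609   # household GRPs for the 103-spot buy
spots = 103

nielsen_news, rentrak_news = 3.9, 7.0   # avg household rating for newscasts

def pct_diff(base, other):
    """Percentage by which `other` exceeds `base`."""
    return (other - base) / base * 100

print(f"GRP differential:      {pct_diff(nielsen_grps, rentrak_grps):.0f}%")
print(f"Newscast differential: {pct_diff(nielsen_news, rentrak_news):.0f}%")
# Back-of-envelope only: average Nielsen rating per spot across the buy.
print(f"Avg rating per spot:   {nielsen_grps / spots:.2f}")
```

The 7.0-versus-3.9 gap works out to roughly 79%, matching the differential cited in the study.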

"If there's that kind of variation, maybe that's a little frightening for (a station) selling inventory and hoping you can deliver the media plan and sell a product," says MPG executive Mitch Oscar, a Think Tank member.

Results in Houston and Birmingham showed other wide swings that suggest instability in the Nielsen ratings. Rentrak also had some issues on that front.

For example, in Houston, using September data for all stations' 5-5:30 p.m. newscasts, household ratings on a given day could swing up or down by 15.3% to 23.9%, according to Nielsen. Using Rentrak, the swings ranged from 4.3% to 5.6%.

The Think Tank members make it clear the purpose of their research isn't to determine the validity of Nielsen vs. Rentrak, but to offer academic grist. "There's got to be much more analysis, much broader testing, to see what we're getting out of this," says TVB chief Steve Lanzano.

STB data raises multiple questions about its reliability -- and about why its results differ so widely from Nielsen's. The Rentrak sample size is much larger, something media researchers appreciate. In Grand Rapids, its base is 65,000-plus homes, while Nielsen draws from 1,130.
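Why researchers appreciate the larger base comes down to sampling error, which shrinks with sample size. A rough, hypothetical sketch, assuming a simple random sample and an illustrative 4.0 household rating (real panels are weighted, so these figures are only indicative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error, in rating points, for a proportion p and sample size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A 4.0 household rating is a share of 0.04 of homes.
print(f"Nielsen panel (n=1,130): +/-{margin_of_error(0.04, 1130):.2f} rating points")
print(f"STB base (n=65,000):     +/-{margin_of_error(0.04, 65000):.2f} rating points")
```

Under these toy assumptions, the 1,130-home panel carries a margin of error of more than a full rating point, while the 65,000-home base shrinks it to a fraction of a point -- which is why a larger base is attractive even before the coverage caveats below.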

Yet, the Rentrak data is generated from Grand Rapids customers subscribing to AT&T U-verse and Dish Network. Homes receiving TV over-the-air or via cable -- but without a set-top box -- don't generate data. Rentrak says it adjusts for that as it reports results.

Nielsen, however, has always attempted to measure all homes, regardless of how they get service. It also offers demographic information Rentrak currently does not. Another issue Rentrak may face: The TV can be off, but not the box, so bogus viewing could be recorded.

Still, Nielsen has acknowledged an interest in STB data and has conducted its own test that has the industry waiting for results.

Jane Clarke, head of an industry measurement coalition, says recent research showed local broadcasters were enthusiastic about STB data. Compared with the more skeptical national networks, local stations are "where we heard the greatest need to improve the existing measurement -- particularly in the smaller, diary only markets."

But not all executives working with "diary" markets are champing at the bit to skim STB data. Pat Liguori, who runs research for ABC's owned-and-operated stations, says she harbors some doubts about the stability of the data, along with some practical concerns.

STB data can deliver a year's worth of second-by-second viewing records -- a waterfall too massive for a smaller station to process. Plus, she adds: "There's an expense involved that they have to consider." In the meantime, Liguori suggests STB data may play only a transitional role. Maybe a programming whiz in a garage will turn to TV ratings rather than the next killer social network.

"Who knows what's going to come out of Silicon Valley in the next year or so," she says.

1 comment about "Research Differentials Frightening For Stations".
  1. Eric Scheck from Cross Channel Digital, December 17, 2010 at 8:05 a.m.

    Taken together with the MRC's decision to discredit C3, Nielsen's issues run deep. Could it be the second sinking of the Bismarck at hand?
