Although these pieces typically appear on the business pages, they are not truly business stories. The overnight ratings might tell us something about the financial success of a show, but it's the C3 ratings (which measure commercial viewing after three days of DVR playback) that are used to transact business, not the overnights.
Nor, in a world where 40% of households have a DVR, do the overnight numbers tell us the true popularity of a show. Indeed, a show like Fox's "Fringe" can increase its viewership by another two-thirds after seven days of DVR playback, while the Live-Plus-Seven ratings for shows like "The Event," "The Office," "Modern Family" and "Glee" can be 50% higher than Live-Only ratings.
Of course it would be impractical in the extreme to insist that the media not produce ratings stories until the C3 or Live-Plus-Seven numbers are out. By that time, readers would no longer care; naturally they want to know how a show performed while it's still fresh in their memories. So although overnights are an imperfect gauge, we are stuck with them. But that doesn't mean the media can't do a better job interpreting them.
First, some background. Although Nielsen produces almost every TV ratings number you read in the paper or online, Nielsen itself does not usually push out ratings for a particular show. It does make a weekly ranking available to the media, and on rare occasions it will proactively release numbers for a show of special importance, such as the Super Bowl. But aside from that, it is the networks themselves who publicize the ratings. Every major media company has a PR machine that churns out press releases, media alerts and blog posts on the success of its own shows, typically focusing on the demographic group in which each show performed most impressively. In other words, the media and the networks have a symbiotic relationship: reporters try to write legitimate cultural and business stories, while the networks try to gain bragging rights and validation for their programming.
Fair enough. Unfortunately, though, there is no generally accepted standard that reporters use in their reporting. The numbers need to be accurate, but beyond that, which numbers are used and how they are interpreted are up for grabs. With all that in mind, then, here are some thoughts on how TV ratings stories can make more sense.
1) Use P2+ numbers. If the point of the story is to describe the overall popularity of a program, then include all viewers age 2 and above. Networks frequently release ratings only for 18-49-year-olds. This seems to be a vestige of bygone days when that was the prime target audience, but since the Baby Boomers have almost all moved out of that demographic group, it is an archaic measuring stick. And if you are writing an actual business story, where the purpose is to discuss the business (rather than cultural) implications of the ratings, you should stick with C3 ratings anyway, since they are the true currency.
2) Put the numbers in the context of timeshifted viewing. Live-Same-Day numbers are pretty close to the final figures for some programming, such as sports and news, while they are way off for others, especially scripted programs with devoted followers. Why not spell this out? It's well known, for example, that "The Office" consistently gets a huge DVR audience and that reality shows don't generate as much DVR playback as comedies and dramas; this could be part of the context when reporting on specific episodes.
3) Be more honest about sourcing. Few reporters disclose where they got their information, and some don't even mention that they are using Nielsen numbers. On most news topics, a respectable news outlet will disclose where its data comes from so the reader can evaluate the motivations of those who released it. Yet to read most reporting of TV ratings, you would think these numbers had fallen out of the sky.
4) Use total viewing numbers (i.e., the number of people who watched the show) rather than the actual rating (the percentage of the viewing universe that watched). It's a lot easier to say that 15 million people watched a show than to say that it got a 5.0 rating and then explain what a rating is. A rating does become necessary when comparing contemporary performance to previous decades, when the total audience was smaller, but it's more trouble than it's worth to use a rating (or, God forbid, a share) in a basic viewing story.
5) Be clearer on cable performance. It's very hard to get a true picture of cable performance because the same episode of a cable show is frequently broadcast three or four times in the same 24-hour period. To fully understand a cable show's popularity, I think it makes sense to add up all the viewing of a particular episode (e.g., the viewing for the airings at 10:00 p.m., 1:00 a.m. and 6:00 p.m. the next day) and use that as the yardstick. But if you don't want to do that, then at least make it clear what time period you are describing. The recent coverage of "The Daily Show" beating Leno and Letterman, for example, was fuzzy on whether the "Daily Show" performance was just for the 11:00 p.m. airing or for all of the subsequent repeats combined.
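To make point 4 concrete, the relationship between a rating and a total-viewer figure is simple arithmetic: a rating is the percentage of the measured universe that watched. A minimal sketch, using an assumed round universe size for illustration rather than an official Nielsen figure:

```python
# Illustrative arithmetic only: converting between a rating (percent of
# the measured universe) and total viewers.

TV_UNIVERSE = 300_000_000  # assumed P2+ universe (persons age 2+), not an official figure


def rating_to_viewers(rating: float, universe: int = TV_UNIVERSE) -> int:
    """A rating is the percentage of the universe that watched."""
    return round(universe * rating / 100)


def viewers_to_rating(viewers: int, universe: int = TV_UNIVERSE) -> float:
    return round(viewers / universe * 100, 1)


print(rating_to_viewers(5.0))         # a 5.0 rating -> 15,000,000 viewers
print(viewers_to_rating(15_000_000))  # 15 million viewers -> a 5.0 rating
```

Under that assumed universe, the article's own example holds: a 5.0 rating and "15 million viewers" describe the same audience, which is why the plain viewer count is the friendlier number for readers.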
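And the episode-level aggregation suggested in point 5 amounts to summing viewers across all airings of the same episode. The shows, air times and audience figures below are invented for illustration:

```python
# Sketch of the aggregation in point 5: total viewing of one episode
# across its repeats. All numbers are invented for illustration.
from collections import defaultdict

airings = [
    # (episode_id, air_time, viewers)
    ("ep_41", "10:00 p.m.", 2_100_000),
    ("ep_41", "1:00 a.m.", 600_000),
    ("ep_41", "6:00 p.m. (next day)", 900_000),
    ("ep_40", "11:30 p.m.", 400_000),
]

totals = defaultdict(int)
for episode, _time, viewers in airings:
    totals[episode] += viewers

print(totals["ep_41"])  # total across all three ep_41 airings: 3,600,000
```

One caveat with this yardstick: a simple sum can double-count anyone who watched more than one airing, so it is best described as total viewings rather than unique viewers.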
Of course, it's not up to the media alone to improve the clarity of their reporting. It is also incumbent on the networks that release the numbers to rethink what they are issuing and how they explain the full context. Still, it is the media's decision whether or not to run a story without asking questions or seeking clarification, and the media can force changes if they want to. The impact of DVR and online viewing is only going to grow, which will continue to diminish the relevance of overnights. It's time that editors, reporters and TV programmers had a larger discussion about how to describe TV viewing trends.