In the 1960s, Marshall McLuhan dubbed the new electronic media world of television a “global village.” For the first time, people at opposite ends of the country could simultaneously see and hear live events as they happened.
Fifty years later, people can still get the same information at the same time, but they no longer have to access it at the
same time, on the same platform, or even on the same device. Over the past five decades, television has undergone several fundamental changes, affecting not only what is available to view, but also
when, where, and how it can be viewed.
But most of the changes that make measuring audiences more complex happened during the last 10 years or so. Until the early 2000s, the pace
of change was relatively slow, enabling cumbersome media research conglomerates to trudge along with serviceable audience measurement. As cohesive groups of people aged, their media habits
were largely predictable.
It’s hard to imagine now, but before people meters debuted in 1987, the national Nielsen television sample was only 1,200 homes, and demographic data was
only available 36 weeks out of the year. In a three-network, one-screen, 15-channel world, this was fine.
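To put that sample size in perspective, basic sampling arithmetic shows why 1,200 homes could yield serviceable national ratings. The sketch below is a simplification: the 20 rating and the 95% confidence level are illustrative assumptions, and real panels relied on weighting rather than simple random sampling.

```python
import math

# Back-of-the-envelope margin of error for a simple random sample.
# ASSUMPTIONS: the 1,200-home panel size comes from the article; the
# 20 rating (20% of TV homes viewing) and the 95% confidence level
# are illustrative, not historical figures.
n = 1200        # panel homes
p = 0.20        # hypothetical 20 rating, expressed as a proportion
z = 1.96        # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"20 rating from {n} homes: +/- {moe * 100:.1f} rating points")
# -> roughly +/- 2.3 points: coarse, but workable when three networks
#    split nearly the entire viewing audience among themselves.
```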
VCRs started to become prevalent during the late 1980s, and were eventually
owned by more than 90% of TV households. Watching prerecorded videocassettes became a major new use of the television set, and resulted in the broadcast networks giving up on original scripted programming on Saturday nights (which had become movie-rental night). This marked the first time that Nielsen had to admit it couldn’t measure something, as VCR playback was beyond its scope.
At
the same time, as cable television expanded, the number of channels available to the average home started to rise. But in 1990, the average home could still only receive 33 channels. Viewing
habits were relatively stable, and didn’t present too many major challenges to audience measurement.
Eventually, most people got (more or less) the same access to everything (the
exception being premium cable). The slow pace of change made predicting consumer media habits relatively simple, and slow-to-change research companies didn’t have to pivot too quickly, nor
innovate too often.
Since then, change has been rapid and constant, starting with the introduction of the first video iPod in 2005, Facebook and Twitter in 2006, the first iPhone in 2007, Netflix streaming and Hulu in 2007/08, the first iPad in 2010, Netflix’s first original scripted series in 2013 (original series are now common on Netflix, Hulu, and Amazon Prime), and other subscription video-on-demand services such as CBS All Access in 2015/16. Today, the average home can receive more than 200 channels.
Right now, half of TV homes have at least one DVR, but half have none. Half
have subscription VOD, but half do not. About one-quarter of homes have multimedia devices or enabled smart TVs (how much further these will penetrate the marketplace is anyone’s
guess). Not everybody gets everything anymore. Consumers in the same demographic segments have substantially different access to video content — and, consequently,
substantially different media habits.
Since both video viewing and media device usage are so splintered, it’s more important than ever for research and audience measurement to keep up. Equally important, though, is understanding what research companies are actually doing, rather than being fooled by labeling. C3, deceptively labeled as commercial minute ratings, does not measure either individual commercials or fast-forwarding through commercials. But the press has taken to calling it commercial ratings, so people have started to believe it. “Total Content Ratings” are not really going to measure total video content. While single-source measurement has always been an industry ideal, I’m not sure that, in today’s media world, one company can really measure total content.
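To make the C3 point concrete, here is a toy sketch of how an average-commercial-minute metric of that kind is assembled. Every audience figure, minute boundary, and variable name below is hypothetical; real C3 numbers come from Nielsen’s minute-level tuning data, not a structure like this.

```python
# Toy illustration of a C3-style "average commercial minute" number.
# All data here is made up for illustration.

# Audience (millions of homes) for each program minute, combining live
# viewing and DVR playback within three days of the original airing.
minute_audience = {1: 5.2, 2: 5.1, 3: 4.8, 4: 4.9, 5: 5.0, 6: 4.7}

# Program minutes that contain any national commercial time.
commercial_minutes = [3, 4, 6]

# The metric is a single average across those minutes.
c3 = sum(minute_audience[m] for m in commercial_minutes) / len(commercial_minutes)
print(f"Average commercial minute audience: {c3:.2f}M")

# Note what never appears in the output: no audience for any individual
# spot, and no separate accounting of fast-forwarding. The result is one
# program-level average over minutes that happen to contain commercials,
# which is why the "commercial ratings" label overstates what it measures.
```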