An executive director is to be hired and two RFPs are to be issued -- one will relate to set-top data, and the second to measurement of cross-platform audiences.
What might a coalition or a committee achieve? Group efforts, by whatever name, are maligned for being slow-moving and ineffective. What lessons exist? Will seeking proposals and offering "seed" money lead to progress?
The RFP approach implies that somewhere a better idea is waiting to be discovered. An entrepreneur in hiding will be drawn onto the stage. Measuring media usage accurately is a complex and difficult task. In 1988, the television networks tried the same RFP approach. The effort was abandoned in the face of many naive responses from companies and universities that did not understand the business.
Instead, the Committee on Nationwide Television Audience Measurement commissioned one firm to build a better system. The industry spent $50 million over a decade to develop an alternative platform and gained the support of 30 sponsors. But perspectives changed, and in 1999, the industry also abandoned the Smart TV project.
Paying simultaneously for an existing service and for a national rollout of an alternative system had onerous balance-sheet implications. Short-term thinking prevailed. The chief research officers' visions of long-term gains in data quality were not part of the CFO's calculations.
A desire for progress is necessary, but not sufficient, to drive change. "Sweat equity" has been a component of previous committees. Faced with questions similar to those inherent in processing set-top box data, the industry developed standards for reported information, procedures and accuracy.
An Advertising Research Foundation Radio-Television Ratings Review Committee did the heavy lifting. In 1954, after "approximately 100 meetings" within a blue-ribbon panel of media, agency and advertiser participants, a 70-page report was issued. That report set the guiding principles for the following decades.
Similarly, the issues implicit in cross-platform measurement have been raised and addressed by the industry. In 1961, in response to the need for better yardsticks for determining how much to spend and how to allocate media expenditures, the ANA published a 114-page booklet, "Defining Advertising Goals for Measured Advertising Results," known by its acronym, DAGMAR. The fact that there were 10 printings of that report over four decades speaks to the value of this industry sweat equity.
The responsibility for cross-platform measurement rests with the advertiser. It is accomplished by applying scientific principles of measurement. Every advertiser and every campaign is different. Over the past 50 years, there have been 20 efforts directed at generic single-source measurements. None have succeeded; none will succeed. Specific ad campaigns should be designed with specific objectives, and results must be measured and tracked over time. By so doing, knowledge is built. That is the message of DAGMAR.
Outside the U.S., where antitrust laws permit, Joint Industry Committees have directed media measurements for decades. CIMM is not a JIC. With a JIC, the industry sets measurement specifications, companies bid on providing the desired services and a contract for a designated period is awarded. The specifications are highly detailed and a challenge to produce. The industry sweat equity is in the specifications and monitoring adherence to the specifications.
This new U.S. coalition comes on top of another industry committee, the Council for Research Excellence. It was formed five years ago with a similar charter: to gain improvements. That committee, funded by Nielsen, is well-financed and populated with equally well-motivated, intelligent participants. Results from the CRE to date seem to be of marginal relevance. CIMM's independence from Nielsen may bring greater credibility to efforts to pressure Nielsen to improve -- or will it? Nielsen is talking of becoming a CIMM participant.
What is unclear is the endgame. Assume that a better mousetrap is discovered. Wouldn't Nielsen simply buy the promising new player, and doesn't the industry end where it started -- captive to the same monopolist? Is the industry not simply doing Nielsen's R&D job for it? Historically, Nielsen has profited from charging twice -- buy the service and then, to use the service, buy tabulations. Nielsen already funds its own ongoing R&D; will the industry now pay twice again?
While the intent of CIMM is admirable, it is difficult to conceive of how an RFP process could yield the same good results as the cited industry landmark efforts. If sweat equity is a vital ingredient, where does it come from? Hundreds of people have been eliminated from network research departments. The agency research function, once the leading force, has become virtually nonexistent.
Advertisers are in a different position. They are not as detached from this scene as they sometimes seem. Many quietly track their advertising, but they do not talk publicly. Their participation in industry efforts has always been guarded. Advertisers want to protect and leverage the knowledge they gain, rather than passing the learning to competitors.
So will there be progress?
Happily, several companies are currently providing nascent set-top box data services. The hope is that alternative data sources and improvements will come by encouraging open competition.
On cross-platform, media companies can learn by working with advertisers on structured, campaign-specific effectiveness measurement and, in the process, gain knowledge about general ad effectiveness. Allocating money for research as part of the media buy enables joint learning.
Yes, natural market forces will bring improvements. In the meantime, perhaps cursing the darkness is just part of the process. And forming coalitions or councils or committees reassures managements that something is being done -- it's only time and money.