The obvious goal is to define a set of standards so that media auditing firms act responsibly in their duties, from the use of research to the final recommendations delivered to the client. The issue at hand: this approach is truly the tail wagging the dog.
There are no standards in place governing how third-party media-buying platforms process media research data, which allows for possible distortions of industry data that just happens to be MRC-accredited. Many do not understand the role that third-party processors play in the audience measurement and actualization process, or the underlying inconsistencies in the way each approaches this task.
All media research providers should define standards for processing their respective data, and should even go so far as to insist on audits of compliance with those standards.
Many buyers don’t understand the math behind their audience estimates and simply take what is served up by the third-party platform. We’ve seen buyer estimates in spot TV where the audience estimate is three times the prior four-book trend, with no change in the program being estimated or in its lead-in/lead-out programming.
If we as an industry are concerned about responsible use of media research, then buyers’ estimates need to fall more in line with what the data would project. This requires greater transparency into the business rules built into each third-party processor, as well as agency buying organizations taking more accountability for the information they publish to clients.
Standards for audience estimating should be finalized by the 4As and their member agencies. With empirical standards for estimating in place, there is better alignment when an agency is audited. In the end, the audit focuses on the media agency’s ability to estimate audience, not on the station.
It’s critical that up to this point in the ecosystem, the processes be empirically driven and documented.
What is still amazing is that these estimates, in many cases, are not research-based, yet are routinely accepted by the station. If a buyer estimates a 10.0 against a program that routinely delivers a 3.0, why would the station accept the order? We hear from buyers that “it’s guaranteed,” but when the buy doesn’t deliver, that guarantee doesn’t help the client meet its communication goals for the campaign.
In the end, it’s the stations’ inventory that buyers are assigning ratings estimates against. Stations, you own the inventory, and you should own the estimates. If a buyer provides estimates that are not based in reality, the station should push back. Let the agency buyer know that you aren’t going to accept and guarantee artificially inflated estimates.
So where do we go from here?
In the end, we have a system where third-party processors have no consistent standards, agencies do not fully understand how their platforms generate ratings estimates, and stations don’t, either. Yet stations accept those estimates anyway.
Does this sound ideal to anyone?
The top priority should be extending the MRC’s reach beyond media auditing alone. Media research providers and the third-party platforms that process media research data should all be held to a defined industry standard. It does no good for agencies or stations to dispute the numbers generated in an audit if there has been no due diligence to ensure clear expectations up to that point.
Buyers should also be held to a measurable standard based on empirical data. This approach ensures that, from start to finish, buyers can be properly and consistently held accountable by the media audit firm.
Stations, step up. As previously communicated, it’s your inventory. You should own this discussion.