At a time when the ad industry’s dependence on data analytics and measurement technologies appears to be growing, the vast majority of advertisers and agency executives say they have little faith in the metrics they use to plan and buy media.
Only a third (33%) of ad execs say they consider their audience insights “completely trustworthy,” according to a survey of 197 advertiser and agency decision-makers conducted online by Advertiser Perceptions (AP). The study, “The State of Advertising Measurement Report,” is intended to serve as a benchmark of industry confidence in the data and measurement tools used to make media-buying decisions, and will be updated periodically -- most likely once a year.
When AP drilled deeper, asking ad executives how trustworthy they felt their “audience analytics/measurement” currently are, their confidence levels were even lower. Only 29% said they feel these currently are “completely trustworthy.”
Interestingly, only a third (34%) said they felt Nielsen’s “C3” and “C7” audience ratings -- currently deemed a “gold standard” of audience measurement by many in the industry -- are “completely trustworthy.”
The AP report does not break out the responses of advertisers versus agency executives, but AP Vice President-Intelligence Justin Fromm says it was “pretty consistent all the way through.”
“What we see really is the fact that as we move toward people-based advertising, there’s no question that the need for accurate measurement grows exponentially,” he said, referring to the “people-based” concept that seeks to connect actual people to advertising exposures and media buys by correlating the “first-party” data brands and agencies can access about them with the media bought to reach them.
Interestingly, ad executives said they were equally dubious about the accuracy of the first-party data they use to power people-based marketing and media buys. Only 32% said this data is “very accurate.”
“What we’re seeing is that if advertisers don’t trust the data they have in hand, it’s not easy for them to make those decisions well,” Fromm says, adding that the lack of confidence may be stymying the industry’s adoption of more sophisticated audience analytics techniques and targeting.
Fromm acknowledged that the trustworthiness of audience measurement and analytics is not a new issue, predating the concerns that have surfaced recently about the accountability of digital audience estimates. Still, the report does not seek to provide a “longitudinal” view looking back over time, he notes.
“There have always been questions,” he says, adding: “This is intended to be a benchmark to look at things going forward.”
Who are these 33%? Audience insights completely trustworthy... per Webster, the definition of trustworthy is "worthy of confidence." What audience insights are completely worthy of confidence?
Joe, this is not very surprising, as very few "advertising" execs have the faintest idea about what the various audience surveys do, how they obtain their data, what it really means, etc. My problem with this study is that it bundles everything together as if it's really one large all-encompassing subject---when it isn't. Assuming that the study questioned only those who actually use the surveys, not folks in general, you would get quite a different answer if the subject was digital audiences as compared to those measured for TV, radio and most print media. In the case of "linear TV," actual users of the data---while raising questions about delayed viewing or digital add-on viewing---probably have few reservations about the general accuracy of the findings. The same is probably true for radio and magazines---albeit with some issues that might be raised, like low sample sizes for spot radio measurements. But digital media is quite another kettle of fish. It would have been far more revealing if properly qualified data users for each medium were posed these questions, individually for each medium.
@Tracey Scheppach: Are you questioning the trustworthiness of the 33% measurement? Just reporting what AP found. They did not disclose who those people are. I'm guessing you wouldn't include yourself among them.
I am questioning the knowledge level of the 33%. I am shocked that anyone who has any clue about the state of audience insights could say they are completely trustworthy. There is so much work to be done here!
You can see our Advertising Week presentation / panel discussion, and download the full report at https://www.advertiserperceptions.com/2017advertisingweek/
Amen
With all its flaws, this study remains a stunning indictment of the mixed state of measurement and a shameful reflection of the weak state of knowledge among research users ... and purveyors.
This reader is reminded of MediaPost's Gavin O'Malley's recent report on Facebook's audience estimates exceeding US Census population estimates for certain demographics.
I guess things would be worse if the industry were free of all reservations. And putting one's head in the sand won't work. The cat's out of the bag. When it comes to Audience Data & Measurement, we are rapidly approaching a new era of media, particularly digital, defined by hustlers and hucksters.
Interesting reactions. I'm struck by @Nicholas' comment - "I guess things would be worse if the industry were free of all reservations. And putting one's head in the sand won't work. The cat's out of the bag. When it comes to Audience Data & Measurement, we are rapidly approaching a new era of media, particularly digital, defined by hustlers and hucksters." It made me think it might be time to revisit the rallying cry that then-JWT exec David Marans made several times at the ANA TV Forum: that advertisers pool a small excise fee of a couple of percentage points of their media buys to build a more perfect measurement system based on their needs and confidence levels. Maybe it's that time.
While I have faulted the traditional media industry for their historical short-sightedness and studied ignorance when it comes to audience research quality, I have also faulted the advertiser/agency segments of the marketing research industry (that includes media measurement) in their quest for a free lunch when it comes to audience research quality.
The entire advertiser/agency community had a chance to support the CONTAM initiative called SMART, which produced a scale model of a research system in the Philadelphia TV market that delivered accurate, reliable and useful TV audience measurement (both linear and digital). While in the end it was the parsimonious TV networks that sacrificed their SMART investments and killed SMART for no good reason, not a single advertiser or agency offered to step into the breach. I "loved" David & JWT, but there is a profound difference between a research director's rallying cry and an agency/advertiser financial commitment. And CONTAM no longer exists.
Fast forward to today. Social media has turned the question of quality into a mess. There is little doubt that traditional electronic media are paying Nielsen a small fortune to measure traditional radio and TV. But how do digital media like Facebook get away with audience estimates that exceed US Census Bureau population estimates?
All of which brings me to the MRC ... the Media Rating Council. I appreciate the need for secrecy when it comes to proprietary methods and technologies. What I do not appreciate is the lack of transparency that permits problems to fester. A great psychologist once advised me that "we are only as sick as our secrets." The media research industry has a sickness that is made worse by the secrecy, stupidity, and silliness of data users.
Thank you MediaPost for exposing critical issues, like audience research quality, to the light of day. Advertisers, agencies, media and research are all in need of the fresh air of the fourth estate if critical progress is to be made.
It's almost 2020. Do we see, do we understand, advertising and media better than we did in 1970? With the exception of Ed Papazian and Joe Mandese, I don't think so!
@Nicholas: Thank you for recollecting the industry's lack of support for Gale Metzger's pioneering SMART TV initiative, including the concept of a UPC code for media (if there was ever a time the ad biz needed that, it is now, when a few behemoths control so much information behind their walled gardens). I also think a certain behemoth of that era put some pressure on industry stakeholders behind the scenes, which may have had something to do with it. So maybe it's naive to think that advertisers would step forward, combine their resources and pony up for a better way of measuring their own ROI today. But one way or another, they are paying for it.
Dear Joe,
As Chairperson of CONTAM (i.e., the Committee On Nationwide Television Audience Measurement, established, like the MRC, after the 1963 Congressional Harris Hearings) from 1989 to 1999, I have a unique perspective on the issues at hand.
It is likely that when it comes to CONTAM and SMART, we have a situation of entities being shot in the foot and stabbed in the back. While I cannot produce the gun or the knife, I saw the blood drain and the money flow. Now, as a good researcher, one ought not to mistake correlation for causation. However, I know what I know, and the evidence is clear. And I just find it a little curious that CONTAM ceased to exist once I left NBC and the Chair of CONTAM.
In addition to identifying Gale Metzger's unique historical contribution to CONTAM and SMART, let us not forget those of his SRI partner, the late, distinguished statistics and business professor Dr. Gerald Glasser. Let us also not overlook the astute, fervent advocacy of the late media guru Erwin Ephron. My apologies to those who also deserve public credit whom I have not spotlighted. It was a privilege to work with you as we sought to understand and improve TV audience measurement and reporting.
Sincerely,
Nick
Nicholas P. Schiavone