In Metrics Insider, Performance Insider, and other digital marketing media, I'm reading a lot about the measurement debate: people talk about attribution, engagement, impressions, and even ancient concepts like OTS, CPM, and GRP.
More than a few years ago (1994) Giep Franzen wrote a book called Advertising Effectiveness. His analyses of TV commercials and print ads led me to wonder whether there’s a parallel between his last-millennium media research and the issues facing digital advertisers today. (And yes, you can try these at home!)
Franzen analyzed full-page, full-color ads appearing in women’s magazines. He combined the results of several research methods: eye-tracking, surveys, and “through-the-book” tests like Starch. Are you sitting comfortably, magazine in hand? Counting down from 100%:
Now put the magazine down and give your neurons a workout:
Franzen also looked at average scores for 30-second TV commercials, using ASI data and people meters. Again counting down from 100%:
This leaves 23% of TV viewers who can remember seeing the spot and who can also name the correct brand.
We still haven’t measured persuasion, liking, loyalty, or whatever objective the advertising is intended to achieve; these metrics are likely lower than the 23% to 25% maximum for correct recall. (Before your slings and arrows start flying, I have to emphasize that these are results of single-exposure tests in each medium. A full campaign, in multiple media, should yield better numbers.)
Returning to the present day: Have any analyses of digital advertising combined time on page/screen, eye movement, unaided and aided brand linkage and content recall? Might old media experience be a guide to understanding new media effectiveness?
Great article, Gray. But as you point out, this was a single exposure, and I have yet to see the 'single-spot TV buy' as a total campaign. So if we 'extend' the TV analysis of the 23% who saw and named the brand from a single spot, using 'random duplication', the logic flows like this.
After one spot, 23% have seen and named the brand, leaving 77% yet to be 'persuaded'. If the second spot maintains the same 'conversion rate', it persuades 23% of that remaining 77%, i.e. a further 18%. This means that after 2 spots, 41% should be able to name the brand. The third spot has 59% yet to be 'persuaded', and if it converts 23% of them, that is another 14%, and we're up to 55% on the positive side of the ledger. The fourth spot takes it to 65%, the fifth to 73%, and so on. Of course the 'persuasion curve' is not linear: it declines, as does the duplication. But this is the way a campaign achieves accumulated conversion.
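The step-by-step logic above is the standard reach-accumulation formula: if each spot converts a constant fraction p of the not-yet-persuaded audience, the cumulative share after n spots is 1 - (1 - p)^n. A minimal sketch, assuming the comment's constant 23% per-spot rate (the function name is illustrative, not from the original):

```python
def cumulative_recall(p: float, n: int) -> float:
    """Fraction who have seen the spot and can name the brand after n
    exposures, assuming each spot converts a constant fraction p of the
    remaining unpersuaded audience (the 'random duplication' logic)."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.23  # single-exposure 'see and name the brand' rate from the article
    for n in range(1, 6):
        print(f"after {n} spot(s): {cumulative_recall(p, n):.0%}")
```

The exact formula gives 23%, 41%, 54%, 65%, 73% for spots one through five; the comment's 55% at the third spot differs by one point only because each intermediate step was rounded before the next was computed.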
Gray, I think there definitely IS a parallel between his last-millennium media research and the issues facing digital advertisers today.
In both cases, the people involved have made the fundamental mistake of taking a linear, mechanistic approach to persuasion. The illusion is that if we can only dissect a frog into finer and finer parts, we can get it to leap higher. The problem is that when we dissect it in this way, it is no longer a frog. It is a sloppy collection of dead atoms; an ex-frog.
I don't believe we can solve the problem of digital ad effectiveness by slicing the frog finer or adding more rows to our spreadsheets.
I firmly *do* believe we should abandon the idea of digital exceptionalism and use the same metrics as we would for any other medium.
For brand efforts, we should use brand-building metrics. For direct marketing efforts, we should use direct marketing metrics.
We now have nearly 20 years of data suggesting that the more columns we have added to our spreadsheets, the worse our results have become.
If we hope to make progress, we have to stop working so damned hard at it :-)