Campaign Results Show: Online and Offline, We've Been Too Simplistic

In the sometimes adolescent debate between “traditional” advertisers and those who employ new media such as the Internet, an important universal lesson has been lost amid the name-calling and boosterism: advertising performance – certainly online, and therefore likely offline – is far more complex, subtle and elusive than we advertisers have ever believed.

The old saw that only half of advertising is useful, and that we just don’t know which half, is true only in principle. In degree, it turns out to be well off the mark.

Over the course of many, many online campaigns, we’ve discovered that perhaps a third of creative concepts really work to the degree we’d like, and about the same proportion of initial media buys make the cut. Combine those two dimensions, and only about 10% of your initial attempts prove out, depending on your level of scrutiny. That scrutiny tends to be very high in the online world, where the key measure is frequently return on investment and most of the numbers are there for all to see.
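The arithmetic behind that rough 10% figure can be sketched in a few lines; the one-third hit rates are the illustrative estimates from the campaigns described above, and the calculation assumes the two dimensions are independent:

```python
# Rough arithmetic behind the ~10% survival figure: if about one third of
# creative concepts perform and one third of initial media buys make the
# cut, and the two dimensions are independent, the surviving share is
# their product.
creative_hit_rate = 1 / 3   # share of concepts that really work (illustrative)
media_hit_rate = 1 / 3      # share of initial buys that make the cut (illustrative)

surviving_share = creative_hit_rate * media_hit_rate
print(f"{surviving_share:.0%}")  # prints "11%" -- roughly one attempt in ten
```

A stricter or looser definition of “working” moves either hit rate up or down, which is why the surviving share varies with the advertiser’s level of scrutiny.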

Even more valuable than this Darwinian reduction to the very best placements, the failures teach us lessons that let us get inside the minds of our audiences. In a recent campaign, a client of mine ran both a top banner and a side “skyscraper” ad built on the same concept – one that tried to make a particular woman in the ad look like a terrible person. The top banner showed just the woman’s head, as that was all we could fit. The skyscraper also happened to show her body. Men, it turns out, are simple creatures: having been shown her body, they seemed incapable of considering her a bad person, which kept them from getting the joke in the creative concept.

This sort of iterative feedback allows us not only to cull ads down to the winners, but also to become much more intelligent about creating them. It appears that advertisers have grossly underestimated the complexity and subtlety of how and why, when and where certain advertising works. Working in the data-intensive realm of online media gives us glimpses of this.

This suggests that so many complicated factors contribute to ad performance that we cannot simply list and measure them all. Brand studies merely show aggregate results after the entire ecosystem of influences has taken its toll. They’re like the part of the newspaper that shows a picture of yesterday’s weather, rather than the chart of fronts and other influences likely to shape tomorrow’s. In the absence of greater consistency, or the emergence of a grand unified theory of marketing performance, it seems that performance itself needs to be measured.

In the offline world, this has been done on a small scale (relative to media budget size), with test markets and brand studies run by the largest brands. But if what we’re learning about the marketing sciences online proves consistently true, offline advertisers should be spending a very large portion of their dollars on measuring more precisely the real effects of traditional ads.
