The one thing that seems to be agreed upon is that, 28 years into the experiment that is online advertising, online ads work.
Thousands of studies show the effectiveness of online ads, like one released earlier this week from IPG demonstrating how online ads perform, and in particular how interactive ads outperform static ads.
People spend inordinate amounts of time online being exposed to these ads. They affect perception and influence behavior.
Still, I find it interesting that when you dig deeper and test, the results are not always what you expect.
I’m an advertiser and a marketer, and I’ve always believed in a simple mantra of “run, test, adjust.” You create a hypothesis, test it out, and read the results. That influences your direction for moving forward.
Over the last 27 years I’ve tested every permutation of ads to see what worked. I’ve tested display ads, emails, landing pages and more. Most recently, my teams ran a test showing that plain-text emails worked better than emails with graphics. We saw personalized emails and ads perform poorly compared to general, non-personalized messages. Sometimes we see targeted ads perform poorly compared to general, run-of-platform ads.
We think context is important -- but sometimes it isn’t.
The moral of the story is that the “rules” are not always applicable. They are guidelines -- benchmarks for performance -- and you need to test in your own situation to find what works best for you and your campaign. Never take your assumptions for granted.
Formulating tests is not that easy. For a test to work correctly, you need to limit the variables. You need to structure your tests to hold constant the elements you aren’t testing, so you can be sure the variable you intended to test is what actually influenced the results. Things like time of day, target audience, primary message, call-to-action, coloring and landing page content can all affect the outcome. Multiple variables dilute the efficacy of your test and make it difficult to act properly on the outcomes.
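Once you have isolated a single variable, judging the result still takes more than eyeballing two conversion rates. A minimal sketch of that check, using a standard two-proportion z-test on hypothetical numbers (the counts and the `two_proportion_z` helper are illustrative, not from any real campaign):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert at a genuinely
    different rate than control A, or is the gap just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    # Standard error of the difference between the two proportions
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: control email A vs. challenger email B
z = two_proportion_z(conv_a=120, n_a=5000, conv_b=155, n_b=5000)
print(round(z, 2))  # |z| > 1.96 roughly means significant at the 95% level
```

With these made-up numbers the z-score comes out around 2.14, so the challenger’s lift would clear the conventional 95% bar -- but only because everything except the one tested element was held constant.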
Testing is important because rules are made to be broken, and audiences are fickle. Entropy even affects the ecosystem of online advertising: what worked in the past doesn’t always work now. I remember once running a series of tests for an online ad campaign where the control consistently beat the newcomers for almost a year. The general thinking was that wear-out would occur, but it never really did. The old standard control was the one that always worked.
More recently, I had a similar experience testing video ads against one another on YouTube. We were simply generating different headlines on the same ad, and the control kept winning. Logic said we would see iterative improvement over time, but we never did.
The fact is, it’s good that the rules are broken. If the rules always applied, this whole industry would be super-boring. The lack of consistency is what opens the door to creativity.
You have to remember that your audience consists of human beings, and human beings are unpredictable. They respond differently to different prodding at different times. Online advertising works because of the creativity inherent in the space -- and that’s where you come in. If the rules always applied, robots could be doing your job. They can’t. This is your business still, and it should be for quite some time to come.