As researchers, we all know that managing to averages is no way to run a business. Yet that is exactly what many of us do when it comes to managing one of the most basic aspects of digital marketing: frequency. Using average frequency as a measure of delivery not only hurts campaign performance, it also leads to wasted impressions and negative brand impact.
The main problem with using average frequency, rather than looking at the full distribution, is that it hides the long-tail nature of ad exposure. If frequency were normally distributed, with an average of 10 and a minimum of 1, we could assume that the maximum frequency would be in the neighborhood of 20. Unfortunately, impression frequency distributions tend to have a very, very long tail.
I’ve seen cases where a partner can have an average frequency of 10, but serve a significant portion of impressions at frequencies greater than 100. If that doesn’t sound too bad,
consider that retargeting partners often go even further: a single cookie ID can be served more than 1,000 impressions in a 30-day period. That's over 30 ads per day, and it doesn't include ad exposure across other, non-retargeting partners. If a user isn't converting after 30 ads per day, you should probably be focusing your marketing somewhere else.
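To see how an average can mask a tail like this, consider a quick simulation. The distributions and parameters below are purely illustrative (not drawn from any real campaign): both sets of users average about 10 impressions each, but one distribution is roughly normal and the other is lognormal, i.e., long-tailed.

```python
import math
import random

random.seed(42)
N = 100_000  # simulated users

# Roughly normal exposure: mean ~10, modest spread (illustrative numbers).
normal_users = [max(1, round(random.gauss(10, 3))) for _ in range(N)]

# Long-tailed (lognormal) exposure with the same mean of ~10.
sigma = 1.5
mu = math.log(10) - sigma**2 / 2  # keeps the lognormal mean at ~10
longtail_users = [max(1, round(random.lognormvariate(mu, sigma))) for _ in range(N)]

def impression_share_above(freqs, cap):
    """Share of all impressions delivered to users seen more than `cap` times."""
    total = sum(freqs)
    heavy = sum(f for f in freqs if f > cap)
    return heavy / total

# Same average frequency, wildly different tails: the normal case serves
# essentially nothing above frequency 100, while the long-tailed case puts
# a large share of all impressions against users bombarded 100+ times.
print(round(sum(normal_users) / N, 1))
print(round(sum(longtail_users) / N, 1))
print(impression_share_above(normal_users, 100))
print(impression_share_above(longtail_users, 100))
```

An "average frequency of 10" report would look identical for both partners; only the distribution reveals that one of them is concentrating a meaningful slice of the budget on a handful of over-exposed users.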
While the highest frequencies occur among retargeters, the issue affects direct publishers and exchange partners as well. By now, most exchange partners
have realized how to take advantage of the last-click attribution model: cookie bombing. When all that matters is the last touch, it makes sense to hit as many people as you can, as often as you
can. In addition to taking credit for undeserved conversions, this strategy creates a massive amount of wasted impressions. Taken in conjunction with the lack of real-time viewability
metrics, this amounts to a considerable portion of media budgets that aren’t actually doing anything. Even for partners that don’t manage their inventory toward last touch, more
likely than not, frequency is still being monitored based on averages.
Because our media partners are often unwilling or unable to look at a frequency
distribution and actively manage it, this responsibility falls entirely on the agency analytics and media teams. As with any problem, the first step is determining the extent of the issue.
You can work with your ad server to generate a report that shows the frequency distribution. In some cases, the standard report only covers frequencies up to 25 or 30. While this isn't ideal, you can still look at the percent of impressions served above that threshold. Anything greater than 3% indicates there may be a problem, and anything over 5% should be alarming. Next, negotiate make-goods from any egregious partners. If your partner relationships are like most, you probably have a verbal or written commitment from
the partner not to exceed a certain frequency cap. Any impressions served over this cap are up for negotiation. In campaigns I’ve worked on, we’ve negotiated tens of thousands
of dollars of make-good impressions.
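The audit and make-good math above can be sketched in a few lines. The report format and all of the numbers below are assumptions for illustration (a frequency-to-unique-users mapping, with the top bucket capped, so a "30+" bucket counted at its floor of 30 makes every figure conservative):

```python
# Percent of all impressions delivered to users seen more than `threshold` times.
def share_above_threshold(report, threshold):
    total = sum(freq * users for freq, users in report.items())
    heavy = sum(freq * users for freq, users in report.items() if freq > threshold)
    return heavy / total

# Impressions served beyond the contractual cap -- the negotiable excess.
def makegood_impressions(report, cap):
    return sum((freq - cap) * users for freq, users in report.items() if freq > cap)

# Hypothetical ad-server report: most users see the ad a handful of times,
# but the capped top bucket hides a heavy tail.
report = {1: 50_000, 2: 20_000, 3: 10_000, 5: 5_000, 10: 2_000, 30: 1_500}

print(f"{share_above_threshold(report, 25):.1%}")   # prints "21.4%" -- well past alarming
print(f"{makegood_impressions(report, cap=10):,}")  # prints "30,000" impressions to negotiate
```

Under these assumed numbers, more than a fifth of impressions sit above the report's top bucket, and a contractual cap of 10 would put 30,000 impressions on the table for make-goods.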
The final step is to ensure that any frequency issue is resolved and managed going forward. As a best practice, frequency caps
should be explicitly stated in any I/O or partner agreement. There should also be a clause stating what the agency or advertiser is entitled to should the site serve at frequencies greater than
the cap. For direct-response (DR) clients that have implemented this, we've seen significant improvements in conversion rate and ROI. In addition to make-goods, this type of diligence on the part of
the agency more than pays for itself with improved performance and greater client trust.