Are Poorly Performing Sites/Placements Really So Bad?

Part of media optimization in a digital world should be planning and optimizing the scale of media. Unfortunately, scale is one of the most frequently overlooked items in the process.

Take site/placement optimization, for instance: as soon as a campaign is in flight, media analysts and planners are poring over performance metrics to get a sense of which sites/placements have been working and which haven't. The hope is that as soon as we know who the bad guys are, we will be able to weed them out by putting them on the chopping block.

This all sounds like a perfect game plan until we are hit by a more practical issue: how bad is really bad? Say we are looking at 50 placements with response rates ranging from a high of 0.01% down to a dismal 0%. The 0%s are no-brainers, since they are as bad as it gets. However, we are very possibly left with quite a number of placements that are lower middle of the road. Is something like 0.005% bad? Well, it is pretty bad considering that it performs only half as well as the top performer. But does that by itself warrant a cut? If so, what about a response rate of 0.0051%? What exactly is the threshold we should use to make that call?
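Part of the trouble is that at typical impression volumes, the observed gap between two response rates may be within statistical noise, so a raw cutoff threshold can mislead. As a minimal sketch (the impression volumes are assumed for illustration; only the 0.01% and 0.005% rates come from the example above), a standard two-proportion z-test shows how hard it is to tell the two placements apart:

```python
import math

# Hypothetical counts: the article's example rates (0.01% vs 0.005%)
# at an assumed volume of 100,000 impressions per placement.
imps_a, resp_a = 100_000, 10  # 0.0100% response rate
imps_b, resp_b = 100_000, 5   # 0.0050% response rate

def two_proportion_z(x1, n1, x2, n2):
    """Normal-approximation z-score for the difference of two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(resp_a, imps_a, resp_b, imps_b)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

At these assumed volumes the z-score comes out below 1.96, meaning the "twice as good" placement is not statistically distinguishable from the "bad" one -- which is exactly why a naive rate threshold is a shaky basis for cutting placements.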



Before we can make a determination on who is good and who is bad, what we really want to know is how well the alternative might perform. Comparing lower-middle-of-the-road placements to the higher-performing ones within the existing roster, and then concluding that the lower performers are persona non grata, is simply not the right way to do this (unless cutting the level of delivery is an option). The real benchmark is whether the alternative will bring in better performance than the "bad" performers down the road. And yes, scalability is one of the key secret ingredients in this process.

With that in mind, if we decide to shift impressions from the few "bad" sites/placements to the few "good" ones on the roster, we should be aware that by doing so, the good ones may turn bad -- and some of them may even become worse than those we just cut, due to the limited scale the good ones face. For almost every site/placement, scalability dictates that the relationship between the number of impressions we deliver and the number of responses we get from those impressions is not linear. It almost always follows a curvilinear pattern, with each additional exposure bringing in a smaller incremental response than the previous one. To put it another way, the more impressions we serve on a particular site/placement, the more saturated -- and the less responsive -- it becomes. Therefore, it's key to understand and quantify that curvilinear relationship (i.e., the scale of the site/placement) and subsequently to estimate at what point additional impressions are no longer warranted.
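One common way to make this concrete is to model cumulative responses as a saturating curve and ask where the marginal response per impression drops below a break-even threshold. The exponential form, the parameters, and the threshold below are all illustrative assumptions, not the article's numbers:

```python
import math

# Hypothetical saturation model: cumulative responses as a function of
# impressions served. Responses approach `ceiling` asymptotically, so each
# additional impression yields less than the one before.
def cumulative_responses(impressions, ceiling=500.0, halflife=2_000_000.0):
    return ceiling * (1.0 - math.exp(-impressions / halflife))

def marginal_response_rate(impressions, step=100_000):
    """Incremental responses per additional impression around this volume."""
    return (cumulative_responses(impressions + step)
            - cumulative_responses(impressions)) / step

# Step up the volume until the marginal response rate falls below an
# assumed break-even threshold of 0.005% (0.00005 responses/impression).
threshold = 0.00005
volume = 0
while marginal_response_rate(volume) > threshold and volume < 50_000_000:
    volume += 100_000
print(f"Additional impressions stop paying off around {volume:,}")
```

Under these assumed parameters the cutoff lands a little past 3 million impressions: beyond that point the placement's average rate may still look respectable, but each extra impression is no longer earning its keep.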

We can, of course, choose to shift impressions from the "bad" ones to brand-new ones. In this case, we face the limitation of not knowing how the new placements will perform against the ones we are supposed to kill. One obvious solution is to add new placements without killing the existing ones in their entirety. In other words, we can put the "bad" ones on a watch list and shift some volume out of them to test new ones. The one thing we need to be extremely careful about is that when we compare the performance of the new placements against the existing "bad" ones, we ought NOT to look at cumulative performance from the inception of the campaign. The problem with comparing performance from day one is, again, that it ignores the scale of the placements. The "bad" ones, since they have been running longer than the newly picked ones, are more likely to be less scalable. So what we ought to do is measure performance using the same starting point (i.e., the day the new placements were added).
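The same-starting-point comparison can be sketched in a few lines. The daily logs below are entirely hypothetical (placement names, dates, and counts are assumptions): the incumbent's daily responses decline over time, mimicking saturation, while the new placement starts ten days into the month:

```python
from datetime import date

# Hypothetical daily logs: placement -> day -> (impressions, responses).
logs = {
    "incumbent_bad": {date(2011, 3, d): (100_000, 12 - d // 2)
                      for d in range(1, 21)},
    "new_test":      {date(2011, 3, d): (100_000, 7)
                      for d in range(11, 21)},
}

def response_rate(placement, since):
    """Response rate counting only days on or after `since`."""
    rows = [v for day, v in logs[placement].items() if day >= since]
    imps = sum(imp for imp, _ in rows)
    resps = sum(resp for _, resp in rows)
    return resps / imps if imps else 0.0

launch = date(2011, 3, 11)  # the day the new placement was added
# Cumulative-from-day-one flatters the saturating incumbent...
print(f"incumbent, cumulative: {response_rate('incumbent_bad', date(2011, 3, 1)):.6f}")
# ...but over the shared window the new placement comes out ahead.
print(f"incumbent, windowed:   {response_rate('incumbent_bad', launch):.6f}")
print(f"new test,  windowed:   {response_rate('new_test', launch):.6f}")
```

In this made-up example the incumbent's cumulative rate ties the new placement's, yet over the window both actually shared, the new placement clearly outperforms it -- the distortion the same-starting-point rule is meant to remove.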

All in all, good media analytics/optimization is not just about looking at what happened in the past, but about using what happened in the past to look into what might happen in the future. And in digital media, the future is almost always dependent upon the scalability of our buy. Unfortunately, scale is something that deserves a lot of attention -- but hasn't quite gotten it so far.

2 comments about "Are Poorly Performing Sites/Placements Really So Bad?".
  1. Xuhui Shao from Turn, Inc., March 18, 2011 at 3:47 p.m.

    Very well said, Chen! It gives people an appreciation of the level of difficulty involved in making media decisions. To make things worse, other factors also affect site/placement performance: time/day/user/page. One also needs to model variance when comparing two mean values.
    And then to top it off, you may also consider the current last-touch attribution model might be flawed and bias your numbers.
    All in all, I'm of the opinion that we're better off executing these trade-offs algorithmically in real-time.

  2. Mark Hughes from C3 Metrics, March 23, 2011 at 10:55 a.m.

    And if you're not using a robust yet simple attribution solution beyond last view/last click, you'll never know if those placements originated or assisted conversion--thereby cutting media that actually drove impact.
