A GRP in the traditional sense is still very important. We need to come to a consensus now on the definition of an online GRP and how it relates to a traditional GRP. We need to define reach and frequency as measures of the total audience, not just the audience represented by a single site. I would also argue that once we can finally define an online GRP, we can also define an online Cost Per Point, and compare online to television, print and other forms of media that command higher spending levels. I have a feeling that if we can make that apples-to-apples comparison, we will finally prove that online is not only a more interactive medium for raising brand awareness, but also a more efficient one for reaching your target audience with a specific message.
Why is it taking so long? I think the biggest hurdle to addressing the GRP is the element of duplication. Duplication among sites is the only real roadblock that holds us back when we analyze the effective reach across the various sites on a campaign recommendation. It is actually very easy, using the tools at our disposal, to estimate the size of the total online audience and measure it against the total US audience for that segment. That defines the potential reach of the online medium into the target audience. We can then factor in the composition of individual sites for that audience, but the hurdle is the duplication among the multiple sites in a recommendation. If we focused our dollars on just one site, a single GRP for that recommendation would be easy to arrive at, and therefore so would a Cost-Per-Point for that placement. If you spent your dollars on one site, you would probably see that the CPP for reaching that segment of your audience is more efficient than in other media. But we still cannot get to the final number, because no one spends their money on just one site, and duplication gets in the way.
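To make the single-site case concrete, here is a minimal sketch of that arithmetic. Every number below is an illustrative assumption, not real audience data; the variable names are mine, not an industry standard.

```python
# Hypothetical single-site example: GRP and Cost-Per-Point arithmetic.
# All figures are made-up assumptions for illustration only.

target_universe = 10_000_000   # total US audience for the target segment
unique_visitors = 2_500_000    # the site's unique visitors within the segment
impressions = 7_500_000        # impressions delivered to the segment
cost = 150_000.0               # media cost in dollars

reach_pct = 100 * unique_visitors / target_universe  # reach in rating points
frequency = impressions / unique_visitors            # average frequency
grps = reach_pct * frequency                         # GRPs = reach x frequency
cpp = cost / grps                                    # Cost-Per-Point

print(reach_pct, frequency, grps, cpp)  # 25.0 3.0 75.0 2000.0
```

With one site there is nothing to deduplicate, which is exactly why this case is easy and the multi-site case is not.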
There is more than enough data available from analyzing the data housed within the major ad-servers to come to some estimates of duplication among sites within categories, so why have we not analyzed this data yet? What’s holding us up from at least coming to some sort of industry standard for duplication within categories of sites?
For example, consider a hypothesis: the community of music fans is a loyal bunch, and of the top 5-6 music sites, the core audience probably visits 2-3 regularly, with high duplication among those sites. Conversely, they probably visit only 1 of the 5-6 major portals on a regular basis (we are seeing that users are rather loyal to their portals). Therefore, to get the most efficient use of your dollars, a targeted advertiser would spend on 3 of the core sites and 1 of the major portals. Additional budget and higher frequency goals might broaden the list of sites on the campaign, to reach the audience in multiple environments.
This hypothesis would be easy to test if we had access to the duplication, or at least an estimate of the duplication, among the sites in that category. You can ask each of the sites on your plan, but the numbers you receive are not from an objective third party, and they are very rarely the same.
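If those third-party duplication estimates existed, combining them would be straightforward. Below is a minimal sketch of deduplicating reach across three hypothetical music sites using estimated pairwise overlaps and a truncated inclusion-exclusion sum. The site names, reach figures, and overlap rates are all invented for illustration.

```python
# Hypothetical deduplication sketch: combining per-site reach with
# estimated pairwise overlaps. All names and numbers are assumptions.

# Fraction of the target audience each site reaches on its own.
site_reach = {"music_a": 0.12, "music_b": 0.10, "music_c": 0.08}

# Estimated fraction of the target reached by BOTH sites in each pair
# (the duplication number we wish the ad-servers would publish).
pair_overlap = {
    ("music_a", "music_b"): 0.05,
    ("music_a", "music_c"): 0.03,
    ("music_b", "music_c"): 0.02,
}

# Naive plan reach: just add the sites up (overstates true reach).
naive_reach = sum(site_reach.values())

# Inclusion-exclusion truncated at pairwise terms: subtract each overlap.
# Ignoring triple-and-higher overlaps makes this a conservative lower
# bound on the plan's true unduplicated reach.
dedup_reach = naive_reach - sum(pair_overlap.values())

print(round(naive_reach, 2), round(dedup_reach, 2))  # 0.3 0.2
```

The gap between the two numbers (30 points of gross reach versus 20 points unduplicated, in this made-up case) is exactly the error an agency makes today when it cannot get objective overlap data.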
The issue of duplication seems to be the one hurdle to settling the GRP debate. Would it benefit the entire industry if the leading ad-servers worked together for once and published some of these numbers? I think it would. The research companies have been quiet as of late, and the various industry organizations have not really led the charge, so maybe it is time for the actual soldiers in this battle for ad dollars to set aside their differences and assist in the growth of the industry.
Maybe I am being a bit optimistic, but I think that it can be done.