I will provide observations on a few of John's points here, but encourage you to get the full story directly from his post.
The Google Ad Planner was announced with much fanfare at the Advertising Research Foundation's Audience Measurement Symposium 3.0 conference in June. Consumers of audience measurement data generally expect a reasonably high degree of transparency and disclosure with respect to methods, but the Google methodology remains something of a black box, whether we're talking about the source of the data, its nature (person or machine, server side or client side), or the demographics. Perhaps surprisingly, given Google's access to large volumes of site-centric data via the Google Analytics program, the Google Ad Planner projections of Web site Unique Visitors run, on average, about 56% of those reported by comScore's Media Metrix. Some had hoped that bigger samples would translate into larger audience numbers closer to server-side estimates; this comparison shows that is clearly not the case.
In the ongoing panel-versus-server debate, where publishers are sometimes vexed that panel-centric reach estimates (based on unduplicated persons) are significantly lower than site-centric reach estimates (based on unique unduplicated cookies), the Google estimates have puzzled many. Some have even suggested that the Google Ad Planner is not so much an audience measurement product as a planning tool infused with data, designed to help smaller agencies buy display advertising and to increase dollars spent on the Google Content Network.
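The mechanics of the panel-versus-server gap can be made concrete with a small sketch. The data below are hypothetical and purely illustrative: each visit records a person and the cookie that represented them, and because cookies are cleared and people use multiple machines, one person spawns several cookies.

```python
# Illustrative sketch (hypothetical data): why server-side "unique cookie"
# counts can run well above panel-style "unique person" counts.
visits = [
    ("alice", "c1"), ("alice", "c2"),  # Alice cleared her cookies mid-month
    ("alice", "c3"),                   # ...and also visited from her phone
    ("bob",   "c4"),
    ("carol", "c5"), ("carol", "c6"),  # Carol uses work and home machines
]

unique_cookies = len({cookie for _, cookie in visits})  # site-centric view
unique_persons = len({person for person, _ in visits})  # panel-centric view

print(unique_cookies)                   # 6
print(unique_persons)                   # 3
print(unique_cookies / unique_persons)  # 2.0
```

With this toy data the cookie count is exactly twice the person count, which is the shape of the "two to three times higher" discrepancy publishers see in practice.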
Quantcast offers the promise of "hybrid" measurement, incorporating site-centric beaconed data along with "panel" data. (I put panel in quotes because it isn't entirely clear what the panel is. There are indications that the panel is in fact machine-level data such as ISP data, which I do not consider to be panel data, since machine-level data does not allow you to know which individual in a household is using the computer at any point in time.) In our analysis, we found, as John's piece notes, that in aggregate Quantcast's projections are on average about 97% of the comScore projections. Of course, at a granular level (i.e., for a specific site), significant differences emerge. The important point is that the implied promise of the Quantcast "hybrid" technique -- audience counts matching server counts -- has not, in fact, materialized. The site-centric estimates of unique audience that run two to three times higher than panel-centric measurement remain the outliers.
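One way to picture what a "hybrid" adjustment does is to scale a site-centric cookie count down by a panel-derived cookies-per-person ratio. The numbers and the adjustment below are my own illustrative assumptions, not Quantcast's or comScore's actual methodology:

```python
# Hypothetical sketch of a "hybrid" adjustment: a site-centric beacon
# count is deflated by a cookies-per-person ratio estimated from a panel.
# All figures are illustrative assumptions, not any vendor's real numbers.
beaconed_unique_cookies = 1_200_000   # site-centric (tag/beacon) count
panel_cookies_per_person = 2.4        # estimated from panel overlap

estimated_persons = beaconed_unique_cookies / panel_cookies_per_person
print(round(estimated_persons))  # 500000
```

The point of the sketch is that the person estimate a hybrid produces depends entirely on the quality of the panel-side ratio, which is why the person-level panel remains the load-bearing piece.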
Does this mean comScore is dead set against the "hybrid" approach? Not at all.
In June, we introduced an enhanced methodology for our Video Metrix service, beginning with panel measurement of person behavior and adding a tagging and beaconing component. We do this for video in order to account for the many technical differences involved in measuring video content as compared to traditional page-oriented content, specifically: (1) identification of multiple video file formats; (2) reconciliation of non-standardized methods of content delivery and distribution; and (3) identification and classification of the specific content viewed.
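To make the first of those steps concrete, here is a minimal sketch, under assumed naming, of what classifying a video file format from a beaconed media URL might look like. The extension-to-format table and the function name are my own illustrative choices, not the actual Video Metrix implementation:

```python
# Illustrative sketch (assumed names, not comScore's actual code):
# one step a video-measurement hybrid must handle is identifying the
# file format of the media reported by a tag/beacon.
from urllib.parse import urlparse
import os

KNOWN_FORMATS = {
    ".mp4": "MPEG-4",
    ".flv": "Flash Video",
    ".webm": "WebM",
    ".m3u8": "HLS playlist",
}

def classify_video(url: str) -> str:
    """Map a beaconed media URL to a coarse format label."""
    ext = os.path.splitext(urlparse(url).path)[1].lower()
    return KNOWN_FORMATS.get(ext, "unknown")

print(classify_video("http://cdn.example.com/clips/interview.flv"))  # Flash Video
print(classify_video("http://cdn.example.com/live/stream.m3u8"))     # HLS playlist
```

Steps (2) and (3) -- reconciling delivery methods and classifying the content itself -- require far more context than a URL suffix, which is part of why video measurement needed its own methodology.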
But Video Metrix is a different kind of hybrid: it is a panel-centric hybrid. There is no question that machine data and site-centric beaconed data can inform, enrich, and improve audience measurement systems for certain applications. But as marketers, as publishers, and as researchers, we must keep the person -- the shopper, the buyer, the visitor -- at the center of the model. That is why I believe in panel-centric measurement as the cornerstone that delivers measurement of the individual -- even when that measurement is a panel-centric hybrid.