Commentary

Web 'Standards' Are Elusive Goal

So I just opened an IE tab using an extension in Firefox, cleared my cookies, turned JavaScript off, then reset my modem. I was looking at a page with Ajax elements on it, and then let my wife do some shopping online. I'm pretty sure we've created about fourteen data points that a comScore or a Hitwise will count as either four or seven users.

You get the point. With the recent demand by the IAB that comScore and Nielsen NetRatings audit their processes, the Web metrics business takes yet another hit.

Every day it gets harder and harder to accept the fact that the Internet economy runs on such awkward dynamics: various vendors offering various numbers, accrued using various methodologies.

If the medical field ran this way, many more of us would be dead or undergoing treatments with success rates reminiscent of the Middle Ages. The horror scenarios at NASA would be endless: "Carl says that bolt has to withstand four tons of pressure, but it really depends on what he means by 'pressure.' Or by 'four.'"

Standards are great -- when they're standard. For many online businesses, dollars earned are the only true measurement of success or failure, while stats honestly provide little more than guidance to overarching trends.

When your numbers are hampered by issues like cookie behavior, caches, proxies, spiders, Ajax, Flash, JavaScript, and the simple differences between log analysis, representative panels, and page-based triggers, it's pretty sad that we're waiting quietly for someone to work it out while we keep pouring billions of dollars into the machine.

You simply cannot base business decisions -- about, say, usability -- on numbers where 3% can make the go/no-go difference. When you realize that your margin of error is 7%, it eliminates all but the most massive -- and rare -- swings from consideration. And the biggest challenge, of course, falls to marketers, for whom these systems drive advertising compensation as well.

Finally, we have the moral dilemma -- whether for a news article, a conference call, a price quote, or marketing material, a company can technically choose the most beneficial metrics. One week, you're the 64th most popular site on Hitwise, while a month later you have the 48th greatest reach according to Alexa.

The solution? Let's acknowledge that bad stats are potentially more dangerous than no stats. Inaccurate numbers can cost someone -- the site or its advertisers -- money they don't know they are losing.

Instead, the industry has to accept that individual advertiser-side metrics based on performance are the only way to make pragmatic decisions.

It's simple: if I spend $5 and earn $5.25, that's not the same as earning $6. And I can measure the difference using initial micro-campaigns tied to advertiser-based conversions. Yes, there's a cost, and it takes more logistical effort.

But with infrastructure tweaks designed to implement these tests more quickly, an advertiser can decide whether or not a site is the place for a full-blown ad spend. When an advertiser or agency needs to choose among three sites for an ad buy, it would ask each for a modest batch of free impressions (considered an ad sales expense for the publisher) and pick the winner based on the total revenue those sample impressions drive.
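The test-buy comparison described above can be sketched in a few lines of code. Everything here is illustrative -- the site names, impression counts, and revenue figures are made up, and real conversion attribution would come from the advertiser's own tracking:

```python
# Hypothetical sketch: pick a site for a full ad buy based on revenue
# driven by a batch of free test impressions on each candidate site.
# All names and numbers below are illustrative, not real data.

def best_site(test_results):
    """Return the site whose sample impressions drove the most revenue.

    test_results maps site name -> (impressions served, dollars of
    revenue attributed to conversions from those impressions).
    """
    # Normalize to revenue per thousand impressions so unequal
    # batch sizes don't skew the comparison.
    def revenue_per_mille(stats):
        impressions, revenue = stats
        return revenue / impressions * 1000

    return max(test_results, key=lambda site: revenue_per_mille(test_results[site]))

trial = {
    "site_a": (10_000, 52.50),  # $52.50 in attributed conversions
    "site_b": (10_000, 60.00),
    "site_c": (12_000, 61.00),  # more impressions, but a lower rate
}
print(best_site(trial))  # site_b wins on revenue per impression
```

Normalizing per thousand impressions matters: site_c drives the most raw revenue in this example, but only because it served a larger batch.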

In the end, the cost of accurate measurement in a substantial campaign will be offset by the knowledge that every dollar is well spent. If it's a branding effort, then yes -- you're starting from a difficult position. Like physical billboards on the side of the highway, general demographics must remain your guide.

Just as purchasing a car involves endless subjective considerations that make any customer or professional review non-universal (solidity, pickup, turning radius), so too must we accept that sites, metrics vendors, and advertisers use different methods to track activity -- and always will.

It's time to bite the bullet and accept the extra costs in managing this diversity rather than holding out for the "Standards" standard.
