Commentary

The 5 S's Of Data

At the recent American Association of Advertising Agencies (4A’s) Data Summit, Rishad Tobaccowala of Publicis Groupe referred to the current state of digital marketing data as "crap optimized on crap." And while there's no doubt that there's plenty of crap out there, not all data is created equal. This might seem obvious, but that reality isn't necessarily reflected in all agencies' approaches to selecting their data partners.

In most cases, agencies simply don't dedicate the time and energy required to properly verify the partners with whom they’re working. They accept scale over accuracy. They accept delivery over efficacy. Why is no one looking under the bonnet when it comes to their data sources? Simply put: They are afraid of the answers.

The current system of buying and selling data devalues quality. And in the long term, this is wholly unsustainable. Agencies need to work harder to extract value from data and to differentiate the good from the bad -- and the bad from the downright ugly. For forward-thinking agencies ready to start ensuring that they're investing in quality data, there are five key considerations for assessing data quality:

Source. Agencies must understand where the data they are buying originated. Potential data sources are almost too numerous to count: publisher data collection, survey data, CRM data sets, offline sources. Where your data originated speaks volumes about its validity and utility, and yet many agencies fail to ask this very basic question.

Segmentation. To gauge data quality, agencies must also understand how the data was segmented. There are, of course, several ways data can be segmented. Some approaches rely on modeling and predictive analysis; others rely on user actions, both passive and deliberate.

Recency and frequency are also important in segmentation. When did someone take an action and how often? Is someone who went to a travel Web site once in 90 days really "in market" for travel? Wouldn't people who went to a travel site three or four times in 10 days be a better "in market" segment?
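To make the recency/frequency point concrete, here is a minimal sketch of how such a rule might be expressed in code. The event format, function name and thresholds (three or more visits within the last 10 days) are illustrative assumptions, not any vendor's actual segmentation logic.

```python
# Hypothetical sketch: a simple recency/frequency rule for flagging
# "in-market" travel users. Field names and thresholds are assumed
# for illustration only.
from datetime import datetime, timedelta

def is_in_market(visit_dates, today=None, min_visits=3, window_days=10):
    """Return True if the user visited at least `min_visits` times
    within the last `window_days` days."""
    today = today or datetime.now()
    cutoff = today - timedelta(days=window_days)
    recent_visits = [d for d in visit_dates if d >= cutoff]
    return len(recent_visits) >= min_visits

# A single visit two months ago does not qualify; four visits in the
# past week does.
today = datetime(2015, 6, 1)
casual = [datetime(2015, 4, 2)]
active = [datetime(2015, 5, 25), datetime(2015, 5, 27),
          datetime(2015, 5, 29), datetime(2015, 5, 31)]
print(is_in_market(casual, today))   # False
print(is_in_market(active, today))   # True
```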

Scale. In evaluating data quality, agencies must also consider whether the information can scale. In other words, how actionable is the data within a specific environment?

Scalability often becomes an issue when marketers want to target segments beyond the U.S. Many vendors' data coverage drops off dramatically outside of the U.S., and extrapolating data gathered elsewhere to new countries can pose serious problems, particularly in markets such as APAC, MENA and Eastern Europe, where data scale and density tend to be lower.

Standards. Agencies must hold their data vendors to high standards. However, data standards are difficult to define and maintain. Most of the industry can't even agree on a definition of data "accuracy," even though accuracy flows directly from the processes underpinning data collection and segmentation.

Data standards evaluate the accuracy of data, but do so within the broader context of the campaign type, goal and KPI. In the real world of media execution, it isn't enough simply to have an accurate segment that is true to form. That segment must also be scalable, cost-effective, and deliver against the campaign goal efficiently. Often, accuracy is overlooked when these factors combine to produce a pleasing outcome, and is only questioned when they do not.

Specificity. Data vendors often expose only a fraction of their capabilities; many can unlock and combine a vast array of additional data signals to form a custom audience. Agencies need data that can be customized to meet their requirements. Bespoke segmentation requires transparency, collaboration and open dialogue between agencies and data vendors.

Furthermore, data-sharing relationships, through which agencies seek out second-party data sources, can help to build bespoke segmentation transparently and directly from the source.
