Predictive segmentation in advertising typically refers to the technique of identifying and bucketing audiences that are likely to share lifestyle and purchase behaviors and, therefore, be similarly responsive to particular marketing messaging or offers.
One of the best-known examples was developed and popularized by Claritas more than 40 years ago, when it introduced geo-demographic segmentation in the U.S. keyed to household ZIP codes. That tool was based on the notion that people who lived in similar neighborhoods tended to share purchasing behaviors – that is, “birds of a feather flock together.”
Not only did Claritas back up its segmentation with significant, peer-tested research and empirical data, but it also documented its segment compositions transparently, labeling segments with cute, memorable names like Beltway Boomers, Heartlanders and New Homesteaders.
Digital advertising learned early to marry direct-marketing techniques like predictive segmentation with media-based ad delivery and data-driven predictive models. These became big drivers in the industry with the explosion of behavioral targeting based on previous media or purchase behaviors, creating targets like “auto intenders” or “NFL sports fans.” As advertisers and publishers sought to add scale to these offerings, they naturally made the segmentations “extensible,” using lookalike models to find more browsers and users who shared the characteristics of the kernel set.
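The mechanics of that extension are simple enough to sketch. One common (and deliberately minimal) approach: compute a centroid of the kernel set's feature vectors and rank the wider pool by similarity to it, keeping the top scorers above a threshold. The feature vectors, user IDs and threshold below are purely hypothetical illustrations, not any vendor's actual method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def lookalikes(kernel, pool, k, min_score=0.0):
    """Rank pool users by similarity to the kernel set's centroid.

    kernel/pool: dicts mapping user ID -> feature vector
    k: max number of lookalikes to return
    min_score: similarity floor; lowering it trades purity for volume
    """
    dims = len(next(iter(kernel.values())))
    centroid = [sum(f[i] for f in kernel.values()) / len(kernel)
                for i in range(dims)]
    scored = [(uid, cosine(f, centroid)) for uid, f in pool.items()]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [(uid, s) for uid, s in scored[:k] if s >= min_score]

# Hypothetical feature vectors (e.g., visit frequencies by content category).
kernel = {"u1": [5, 0, 2], "u2": [4, 1, 3]}
pool = {"p1": [5, 0, 3], "p2": [0, 6, 0], "p3": [3, 1, 2]}
print(lookalikes(kernel, pool, k=2, min_score=0.5))
```

Note where the scale-versus-purity tension lives: raising `k` and dropping `min_score` adds reach while quietly admitting ever-weaker matches, which is exactly the drift described below.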
Of course, lookalikes rarely (more likely, never) performed at the level of the directly targeted core audience, but the incremental volume, reach and (most importantly) lower cost made lookalike targeting the mainstay it is today.
Unfortunately, just as everything looks like a nail to the person holding a hammer, far too many marketers and buyers now treat lookalike modeling as the answer to everything. Purity of target, precision and efficacy have long since given way to volume, match rates and efficiency. The true lookalike links between most segment members have become tenuous at best, and most likely specious.
Everyone is now a lookalike to almost all lookalikes. Transparency into composition? Forget about it. Science behind the methodology? No one has the time. Efficacy? Long ago lost out to efficiency.
It’s time to rethink – and restart – how our industry uses lookalike modeling. What do you think?