Commentary

Why De-Duplication Is Becoming The New Reach & Frequency

As a journalist who began covering the ad biz in the quaint age of traditional media, one of the first things I learned about media planning and buying was that it was all about reach -- and at least in those days, optimal frequency.

So when I began covering digital, I would ask advertisers and agencies how they planned and bought digital, sometimes direct or sometimes programmatically. And I was told they were buying “uniques.”

So I asked them, how do you know they’re unique? I was being flip, but I was also trying to make a point -- because unlike a traditional medium like TV, where there was an explicit Nielsen “universe estimate,” and the allocation of GRPs (gross rating points) could be used to calculate reach and frequency, there was nothing like that in digital.
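
For readers who never lived through that math, the relationship behind those GRPs is simple: gross rating points equal reach multiplied by average frequency, so knowing any two gives you the third. A minimal illustration with hypothetical numbers:

```python
# Hypothetical numbers illustrating the classic GRP identity:
# GRPs = reach (%) x average frequency.
grps = 300.0          # gross rating points bought for the schedule
reach_pct = 75.0      # percent of the universe exposed at least once

avg_frequency = grps / reach_pct           # 4.0 exposures per person reached
universe = 120_000_000                     # a Nielsen-style universe estimate (persons)
people_reached = universe * reach_pct / 100

print(f"Average frequency: {avg_frequency:.1f}")
print(f"People reached at least once: {people_reached:,.0f}")
```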

Advertisers and agencies were simply buying the monthly uniques of individual publishers, ad networks, etc., but they didn’t actually know how unique -- or incremental -- that audience reach actually was.

That, of course, led to uncontrolled frequency of digital ad exposure, wear-out, and frustration among consumers and marketers alike. But it also made it difficult, if not impossible, for digital ad buyers to calculate the true ROI of their audience reach.

Over time, the digital media industry created proxies -- browser cookies, device IDs and other relatively unique identifiers -- and a slew of models in an effort to fix the problem, which needless to say led to another problem: the consumer privacy and data sovereignty concerns that have now come home to roost.

So it will be interesting to see how Nielsen’s “One” platform will address that -- at least for its version of an integrated cross-platform TV/video measurement service -- which I assume will have universe estimates, as well as reach and frequency curves.

But recently, I’ve been seeing a number of creative data approaches designed to “de-duplicate” audience reach in a way that seems reminiscent of the advertising and media marketplace I first began covering in the early 1980s.

One was unveiled last week by Interpublic’s Kinesso unit, the data and technology division that was spawned following Interpublic’s acquisition of Big Data giant Acxiom.

While any big consumer data platform worth its salt tries to de-duplicate ad exposure and reach, Kinesso has developed an “intelligent identity” method that seems like a pragmatic, as well as media-neutral, solution.

Dubbed Kii (pronounced “key”), the solution licenses data from a cross-section of the best audience identity partners, utilizing a “cascade syndication” model that effectively de-duplicates their unique reach as it ingests the data into its system.

Even better, Kinesso’s Kii utilizes a “pre-match” method so that it only acquires -- and pays for -- unique new identities added to its database, making the approach cost-effective and cutting out fat and redundancy.
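
Kinesso hasn’t published the mechanics, but the general idea behind a “pre-match” step is easy to sketch: before ingesting (and paying for) a partner’s identity file, check each incoming ID against what is already in the graph and keep only the net-new ones. A hypothetical, simplified sketch of that logic -- not Kinesso’s actual implementation:

```python
# Hypothetical sketch of a "pre-match" ingest step: pay only for identities
# not already in the database. Names and costs are made up for illustration.

def pre_match_ingest(existing_ids: set, partner_feed: list, cost_per_id: float):
    """Return the net-new identities from a partner feed and what they would cost."""
    incoming = set(partner_feed)         # de-dupe within the feed itself
    net_new = incoming - existing_ids    # drop anything we already have
    existing_ids |= net_new              # ingest only the incremental IDs
    return net_new, len(net_new) * cost_per_id

graph = {"id_001", "id_002", "id_003"}
feed = ["id_002", "id_003", "id_004", "id_005", "id_004"]

new_ids, spend = pre_match_ingest(graph, feed, cost_per_id=0.01)
print(f"Acquired {len(new_ids)} net-new IDs for ${spend:.2f}")   # 2 IDs, $0.02
```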

Although officially unveiled only last week, Kii has already been operational with big Interpublic clients, and Kinesso’s team estimates it’s already generating improvements of as much as 20% in campaign reach delivery.

Another interesting pitch I got about reach and de-duplication wasn’t about media exposure, per se, but it also came from a company named Nielsen -- NielsenIQ, the consumer marketing data giant that was spun off from the media-measurement Nielsen earlier this year.

The new NielsenIQ service, dubbed Omnisales, claims to be the first to enable consumer goods marketers and retailers to attribute -- and de-duplicate -- sales across online, in-store and curbside pick-up transactions.

While it’s not exactly the same as calculating the reach and frequency of media exposure by de-duplicating audiences to calculate incremental reach, NielsenIQ’s Omnisales effectively is doing the same thing for another kind of channel: physical and digital retail destinations.
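
The arithmetic matters because simply summing buyers channel by channel double-counts anyone who shops in more than one. A hypothetical illustration of de-duplicated (distinct) buyers versus the naive channel sum:

```python
# Hypothetical transaction log: the same shopper can appear in several channels.
transactions = [
    ("shopper_A", "online"), ("shopper_A", "in_store"),
    ("shopper_B", "in_store"),
    ("shopper_C", "online"), ("shopper_C", "curbside"),
    ("shopper_D", "curbside"),
]

buyers_by_channel = {}
for shopper, channel in transactions:
    buyers_by_channel.setdefault(channel, set()).add(shopper)

naive_total = sum(len(b) for b in buyers_by_channel.values())   # 6: double-counts A and C
deduped_total = len({shopper for shopper, _ in transactions})   # 4 distinct buyers

print(f"Naive channel sum: {naive_total}, de-duplicated buyers: {deduped_total}")
```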

And while the company hasn’t actually created reach and frequency modeling for brands and retailers utilizing the platform, it is something the company is looking at, according to Harvey Ma, senior vice president-consumer and retail performance at NielsenIQ.

“Understanding sales and share from this perspective is fundamental to doing business today,” he says, adding, “Consumers’ behavior has irreversibly changed and we will only see exponential growth in the number of those who shop online, in store, or via click-and-collect — and any combination of the three.”

2 comments about "Why De-Duplication Is Becoming The New Reach & Frequency".
  1. Ed Papazian from Media Dynamics Inc, October 4, 2021 at 1:04 p.m.

    Joe, typically, most reach curve tables for TV are based on a series of tabulations of actual schedules which are broken down into small pieces -- like daytime, late fringe, broadcast network, cable, etc. -- then combined in various ways, the end result being theoretical but reasonably accurate tables that planners can use to approximate the "reach" of various daypart/network-type and program-type mixes at certain GRP levels. As a rule, the time buyers have little to do with this -- the planners specify GRPs by daypart, network type, etc., and the buyers merely fulfill these goals. For those who have studied such data, estimating the reach of a "linear TV" ad schedule and combining it with a digital video schedule is not a very difficult task -- provided you have a good handle on the reach of the digital component. That problem should be solved -- to a degree -- when Nielsen provides us with device-based estimates for both types of "TV" via Nielsen "One".
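
For illustration, the simplest textbook way to combine two channel reaches -- assuming duplication between them is purely random, which real panel data may not support -- is the formula sketched below. The input reaches are made up:

```python
# Combining two reaches under an assumption of random (independent) duplication.
# A textbook simplification; actual overlap between channels can be higher or lower.
def combined_reach(r1: float, r2: float) -> float:
    """r1, r2 are reach proportions (0-1); returns combined net reach."""
    return 1 - (1 - r1) * (1 - r2)

linear_tv_reach = 0.62      # hypothetical linear TV schedule reach
digital_video_reach = 0.35  # hypothetical digital video schedule reach

print(f"Combined net reach: {combined_reach(linear_tv_reach, digital_video_reach):.1%}")  # ~75.3%
```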

    The real question concerns attempts to determine what media mixes and frequencies have an effect on "outcomes" -- like sales. Which is, in my book, also a media planning -- not a buyer's -- function, and one that will not be solved -- if a solution is even possible -- until we add attentiveness to all of our "audience" measurements. At present, this vital refinement does not seem to be in the cards, as the time sellers -- who do most of the funding -- are deathly afraid of attentiveness metrics. I hope I'm wrong about this -- but without attentiveness we can "deduplicate" all we wish and get nowhere.

  2. John Grono from GAP Research, October 4, 2021 at 7:40 p.m.

    Great article, Joe. And great comments, Ed.

    I think it goes to show that 'everything old is new again'. Our maxim was: maximum reach at minimum frequency (because that would take care of budget concerns).

    In the late '90s, when I worked at Clemenger (now part of Omnicom as OMD et al.), we wrote a TV 'optimiser'. It relied on having the 'elemental' TV viewing data from Nielsen and then OzTAM (i.e., person-by-person, minute-by-minute). The buyer (or planner) would input key parameters such as market, demo, budget, campaign dates, GRP ceilings and floors, etc. It used a variety of models, such as game theory and complete randomisation. It could also be 'constrained' or 'unconstrained' -- things like mandatory budget allocations to honour SOV deals, daypart restrictions (e.g., alcohol advertising), narrow demos such as Teens 13-17 or wide demos such as People 18+. It handled flighting requirements (week-on/week-off, etc.) and mandatory programmes (e.g., sponsorship). You could run it on 'most recent' data (week/month, etc.) or based on seasonality.

    Once you provided those specifications, it would run thousands of 'buys' in a minute or two (back when computers were much slower). It did, however, assume that every ad break was available. It was trained to stop itself when it couldn't find a better mix. As it ran, you could see a summary and watch how the reach would increase while the client spend reduced.
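
The core loop of such an optimiser is easy to sketch in principle, even if the production version described above handled far more constraints. A toy, entirely hypothetical version that random-searches spot mixes against person-level viewing data and keeps the cheapest schedule hitting a reach target:

```python
import random

# Toy person-by-spot viewing data: viewers[spot] = set of panelists who would see that spot.
# Entirely hypothetical; a real optimiser would use minute-by-minute panel data.
viewers = {
    "spot_A": {1, 2, 3, 4}, "spot_B": {3, 4, 5},
    "spot_C": {5, 6, 7},    "spot_D": {1, 7, 8, 9},
}
cost = {"spot_A": 50, "spot_B": 30, "spot_C": 30, "spot_D": 60}
universe = 10  # persons in the (toy) panel

def evaluate(schedule):
    """Return (reach proportion, total spend) for a list of spots."""
    reached = set().union(*(viewers[s] for s in schedule)) if schedule else set()
    return len(reached) / universe, sum(cost[s] for s in schedule)

best = None
for _ in range(5000):  # random search: try many candidate buys, keep the best
    candidate = [s for s in viewers if random.random() < 0.5]
    reach, spend = evaluate(candidate)
    if reach >= 0.8:                         # reach target: 80%
        if best is None or spend < best[2]:  # keep the cheapest schedule hitting it
            best = (candidate, reach, spend)

if best:
    print(f"Schedule {sorted(best[0])}: reach {best[1]:.0%} at cost {best[2]}")
```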

    All pretty neat stuff. I think they now call it Artificial Intelligence or Machine Learning.

    But the underlying thing is that your objective needs to be the maximum reach with the lowest possible spend/frequency. Further, a key learning was to discover the point of inflection on the 'Cost per Reach Curve' -- the point where you CAN increase the reach further (generally by a fraction of a percentage point), but it isn't the best use of the client's budget, which would be better spent on a new burst of activity.
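
That inflection point is simply where the marginal cost of the next reach point blows out. A hypothetical illustration with made-up spend/reach pairs:

```python
# Hypothetical spend-vs-reach points along a campaign's cost-per-reach curve.
curve = [(100, 40.0), (200, 58.0), (300, 68.0), (400, 73.0), (500, 74.5), (600, 74.9)]

prev_spend, prev_reach = curve[0]
for spend, reach in curve[1:]:
    marginal = (spend - prev_spend) / (reach - prev_reach)  # $ per incremental reach point
    print(f"${prev_spend}->${spend}: ${marginal:,.0f} per extra reach point")
    prev_spend, prev_reach = spend, reach
```

In this made-up curve, the marginal cost per point roughly doubles with each step and then explodes past about 73% reach -- the kind of inflection where the next dollar is better spent on a fresh burst of activity than on squeezing out more reach.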
