Is There A Future for Research? Q&A With Jeff Boehme

I have known media veteran and consultant Jeff Boehme from our days at NBC in the 1980s. Since then, he has had a varied and interesting career path. A veteran of local broadcast rep firms, NBC, ABC, NCC Media, Nielsen, Kantar Media, Rentrak and Comscore, he has concentrated on audience evaluations and processes for media currency acceptability.

Charlene Weisler:  What role should data play in media today?

Jeff Boehme: Data has always played a critical role in media. Content is now distributed on more types of technology than ever. Virtually all of these digital devices collect usage information and have been enabled in the marketplace by a multitude of companies.

Brand marketers realize the potential of reaching customers with far greater efficiency and effectiveness through addressable advertising across multiple platforms and content.

But defining the benefits of efficiency and effectiveness is not a standardized process; there are real issues surrounding the massive data sets collected from these digital devices as they become ubiquitous as media currency. Ultimately, data can and should be leveraged to maximize the effectiveness of the three basic pillars of brand advertising: creating awareness, reinforcing equity and driving purchases.

Weisler: What types of data are most important? What is currently missing?

Boehme: Over five years ago we understood the remarkable advantages of ‘big data’ expressed as the three Vs: volume, velocity, and variety. The sheer scale of anonymous, passively collected user information provides much more statistically sound results than traditional small panels and surveys.

However, almost every big data set is incomplete and may not include essential data elements required for currency acceptance, making traditional tools still necessary to supply missing data points. I would add that there should be a few more Vs to consider – the validation of the data (how accurate it is) and the ultimate V – its value. The value of the data ultimately answers the questions posed by the brand and can be accepted as currency, with confidence, on all sides of the ecosystem.

The good news is we now have more data than ever before. The bad news is that there are significant inconsistencies in the sources, collection techniques, methodology, standards, transparency and, importantly, conclusions. All major cable MSOs are offering their tuning data to a variety of companies, as are virtually all connected TV (CTV) manufacturers. I have seen significant disparities in results depending on which company processes and manages the data, applies statistical corrections and matches census segments.

Weisler: Should age and gender still form the basis of currency?

Boehme: It really wasn't until 1987, when Nielsen launched its people meter service, that age/gender metrics became the de facto currency. However, many brand marketers learned that age/gender weren't enough to efficiently plan or buy media – specifically for high-spending categories such as automobiles. Most consumer purchase data sets available today are household-specific and include information more relevant than just age/gender. Knowing that a household has a pending lease expiration on a BMW is more valuable than simply counting adults 25-54.

Weisler: What do you think of the general state of attribution?

Boehme: Channeling Sergio Leone's epic western masterpiece "The Good, the Bad and the Ugly": The good is that we now have a plethora of consumer-based intelligence, and media companies are able to use attribution techniques to gain a finer view of customers' behavior across screens and determine which components of media campaigns work (or don't).

The bad is the complexity of data, multiple data sources, missing data points/deprecation and differing methodologies.

The ugly is that there don't appear to be any consistent standards – resulting in significant outcome discrepancies.

Last year, CIMM completed a study on attribution which found that the inconsistency of key television attribution inputs, not technology, is the main cause of variance in outcome measurements. They compared eleven different providers and determined that "more stringent media measurement standards are required to ensure attribution results that are consistent and comparable from provider to provider, with exposure data, more than occurrence data having the biggest impact on outcome results."

I agree with their findings and with their report's other recommendation for additional standardization, such as commercial IDs similar to Ad-ID, for identifying ad occurrences and for defining exposure and reach.

Weisler: What do you think is the most important issue facing research at this time?

Boehme: Most research groups are a cost entry on a ledger, requiring investment without direct responsibility for cash flow. Many successful researchers have learned to move quickly, adopt better data skill sets, provide actionable input to the sales process and discover how their company can be more profitable.

Many companies see data scientists as a replacement for the research process, but smart companies see the value of both, with complementary skill sets and valuable disciplines. The simplest distinction may be that the data scientist determines what could be accomplished with data, and the researcher helps define what should be done with the data.

5 comments about "Is There A Future for Research? Q&A With Jeff Boehme".
  1. Ed Papazian from Media Dynamics Inc, July 16, 2021 at 10:28 a.m.

    Good interview, Charlene, and I agree with Jeff's comments regarding comparability of data and other issues. One point, however: age/sex audience "currencies" and guarantees were in use for national and local TV time buying long before Nielsen introduced the people meter system in 1987. Before that, household diary viewers-per-set findings were merged with meter-based set usage measurements to produce the required numbers---which appeared routinely in most Nielsen national rating reports. Locally, many large markets had similar systems---though with very small panels---while mid-sized and smaller markets had to make do with diary-only measurements.

  2. Gerard Broussard from Pre-Meditated Media, LLC, July 16, 2021 at 12:51 p.m.

    Great interview, Charlene. Jeff, thanks for capturing the metamorphosis of TV research and metrics over the years so well. We definitely need standards to bring all the disparate big data pieces together. A shout out to Jane Clarke and CIMM for playing a key role in standardization efforts to date.

  3. Jeff Boehme from Advanced Media Consultants, July 16, 2021 at 1:29 p.m.

    Hi Ed,
    Good to ‘see’ you again, at least in print.
    Yes, you are correct – and I apologize for not being clear. Prior to the introduction of the people meters, age/gender metrics calculated with the meter/diary integration process (where meters existed) did provide the currency basis for TV negotiations, both nationally and locally. As you noted, most local markets were relegated to diary-only measurement. In my earlier days with NBC, the local broadcast rep firms and spot cable, I remember using solely household ratings & impressions, often with an age/gender inclusion such as 'households with' a particular age/gender factor when traditional measurement couldn't report specific demo audiences.

    Thanks as always for your sage commentary…

  4. Jeff Boehme from Advanced Media Consultants replied, July 16, 2021 at 1:40 p.m.

    Thanks - I also applaud Jane and all at CIMM working hard to help move the industry forward, but also George Ivie and the MRC.
    Audience/consumer measurement is not easy, and two groups in particular have been working diligently to encourage positive and necessary movement in measurement. CIMM (the Coalition for Innovative Media Measurement) was formed in 2009 with the goal of bringing greater transparency and confidence to new forms of cross-platform measurement of TV/premium video. From a standards perspective, the Media Rating Council (MRC) works to secure measurement services that are valid, reliable and effective. It also evolves minimum disclosure and ethical criteria and provides an audit system as services are developed.

  5. John Grono from GAP Research, July 16, 2021 at 9:13 p.m.

    A great article, Charlene and Jeff.

    May I be so bold as to add a fourth V: V for Veracity.

    There is a tsunami of data that many are so eager to analyse that they don't cross-check (internally and externally) what those data sources REALLY mean rather than what THEY THINK they mean. The data is often fed into ML or AI software which analyses it in a plethora of ways and at lightning speed. What used to take a week now takes minutes.

    But because the results are produced by software, they are assumed to be correct. But far too often the veracity of the input data is not tested.

    Here's one simple example. A lot of research (F2F, CATI, online) is constrained by mandated respondent limitations – generally that respondents have to be aged 14 or older. (There are some exemptions for small-scale research where a parent or guardian is present.) There are similar limitations online with age-consent filters, but how valid is that unchecked response, which often results in children inflating their age in order to qualify for the latest age-restricted app? On the flip side, correctly validated data (i.e., confirmed 14+) is often used as an analytical basis for unfiltered traffic and usage (i.e., no age restriction), which inflates the quantitative results.
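    The flip-side point above can be sketched numerically. A minimal sketch, with entirely hypothetical figures (panel sizes, traffic volume and under-14 share are all invented for illustration): a usage rate measured on a validated 14+ panel is projected onto unfiltered traffic that also includes under-14 activity, overstating the estimate.

```python
# Hypothetical illustration of the inflation described above: a rate from a
# validated 14+ panel is applied to unfiltered (all-ages) traffic.

panel_users_14plus = 800        # validated respondents aged 14+
panel_active_14plus = 200       # of those, active on the service
rate_14plus = panel_active_14plus / panel_users_14plus   # 0.25

unfiltered_traffic = 1_000_000  # visits with no age restriction
under14_share = 0.10            # assumed share of traffic from under-14s

# Naive projection: applies the 14+ rate to ALL traffic
naive_estimate = rate_14plus * unfiltered_traffic                      # 250,000

# Consistent projection: restrict the base to the 14+ portion first
consistent_estimate = rate_14plus * unfiltered_traffic * (1 - under14_share)  # 225,000

inflation = naive_estimate / consistent_estimate - 1   # ~11% overstatement
print(f"naive={naive_estimate:,.0f}  consistent={consistent_estimate:,.0f}  "
      f"inflation={inflation:.1%}")
```

    With these made-up numbers, the mismatch between the validated base and the unfiltered base overstates the result by about 11% – the direction of the bias, not its size, is the point.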
