CIMM Study Finds Passive TV Measurement More Accurate Than Active Methods

At a time when the advertising industry is adopting and/or evaluating a wide range of alternative “currencies” for its ad buys, new research suggests that most of the ones relying on some form of active measurement may not be as accurate as many believe.

The research, being released by the Advertising Research Foundation’s Coalition for Innovative Media Measurement (CIMM), is among the findings of a study conducted for CIMM by passive media measurement firm HyphaMetrics.

The study, which passively measured the TV viewing of 100 households in the fourth quarter of 2021, validated its findings with telephone coincidental studies to confirm what people in those households were watching.

What it found was that active measurement systems such as Nielsen’s people meters -- which require respondents to repeatedly push buttons to confirm their presence and what they are viewing -- “introduce friction” that over time creates behavioral conditioning, attrition and non-compliance that skews results.

While a 100-household sample study is not necessarily representative or projectable to the actual viewing universe, the net finding was that passive measurement does a much better job of detecting actual TV viewing than active measurement methods -- by a margin of more than three to one.

While active measurement methods captured only 22% of TV minutes viewed over the course of the study, passive methods detected 78%.

"Our study found consistently high match rates for both learning what households are watching on TV as well as which members of the home were viewing, an indication of this technology's ability to passively measure people in a non-intrusive manner," CIMM Managing Director Jon Watts said in a statement released with the findings.

Watts characterized HyphaMetrics’ “multi-layered persons” approach, which utilizes a combination of WiFi, Bluetooth and infrared signals to detect the presence of a viewer, as having the “potential to facilitate more precise detection of an individual in relation to their media exposure.”

25 comments about "CIMM Study Finds Passive TV Measurement More Accurate Than Active Methods".
  1. Ed Papazian from Media Dynamics Inc, June 26, 2023 at 3:33 p.m.

    Am I reading this correctly? Is this study saying---or implying---that when Nielsen's "passive" people meter tells us that an average TV home resident "watches" 4.5-5 hours of linear and/or streaming TV per day, the actual figure---per an "active" measurement---should be more like 16-18 hours per day? And that's just an average---the heavy viewers would average 30-40 hours of viewing daily. Is that the basic finding? I hope not, and that the press release has not stated the findings correctly, as the smart folks who are involved wouldn't make such a statement---in my most humble opinion.

  2. Ed Papazian from Media Dynamics Inc, June 26, 2023 at 4:02 p.m.

    Correction in my last post---I got the words "active" and "passive" reversed---Nielsen is the "active" one.

  3. Jack Wakshlag from Media Strategy, Research & Analytics, June 26, 2023 at 4:07 p.m.

    Good point Ed.  Oops. 

  4. Ed DeNicola from SceneSave, June 26, 2023 at 5:36 p.m.

    According to the Pew Research Center, response rates for telephone surveys continue to decline; in 2018, they were only 6 percent. I'm not sure it's possible to do an accurate phone coincidental anymore. Nielsen used to use them to check its numbers; however, response rates for telephone surveys were higher then---36% in 1997.

  5. Ed Papazian from Media Dynamics Inc, June 26, 2023 at 6:22 p.m.

    Interestingly, telephone coincidentals were once considered to be "the gold standard" for TV rating research---as late as the early 1990s---but this method was correctly deemed inefficient for the purpose of collecting large amounts of nationally projectable data on an ongoing basis. The CONTAM studies, which were funded by the networks, involved very small samples---I believe about a thousand homes per evening---and in most cases explored only the totality of TV set usage, not individual program ratings. These showed that Nielsen was getting overall set usage in prime time about right.

    Assuming that this particular study secured the wholehearted cooperation of a small sample of people---100 homes---the real question is how the question about viewing---or set usage?---was posed, and could the respondent provide an accurate answer that allows time-"viewed" projections to be made? If the findings show what is suggested in this article---that Nielsen is vastly understating the amount of viewing that is taking place---then I have a problem with such findings. I hope that the good folks who are involved will offer some clarification.

  6. Jack Wakshlag from Media Strategy, Research & Analytics replied, June 26, 2023 at 7:44 p.m.

    Ed, there simply aren't enough hours in the day for this new study to make any sense. If Nielsen only captures 22 percent of actual viewing, that means people are watching almost 20 hours of TV a day, as you noted earlier. That's nonsense.
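[Editor's note: Jack's back-of-the-envelope arithmetic can be checked directly. A minimal sketch, assuming the ~4.5 hours/day Nielsen average that Ed cites in the first comment and taking the study's 22% capture figure at face value:]

```python
# Back-of-the-envelope check: if an "active" method captures only 22% of
# actual viewing, what does Nielsen's reported ~4.5 hours/day imply?
# Both input figures come from the discussion above, not from the study itself.
nielsen_hours_per_day = 4.5   # reported average viewing, hours/day
active_capture_rate = 0.22    # share of viewing the study says active methods detect

implied_actual_hours = round(nielsen_hours_per_day / active_capture_rate, 1)
print(implied_actual_hours)   # 20.5 -- roughly 20 hours/day, as implausible as the thread notes
```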

  7. Ed Papazian from Media Dynamics Inc, June 27, 2023 at 9:13 a.m.

    What's really strange about this is that Nielsen probably overstates "viewing" by about 15% if you generously consider "viewing" to be presence in the room and deduct only those who absent themselves while "watching" any content. Focus only on commercials---where there is much more "absentee viewing"---and the overstatement is closer to 35%.

    Even stranger is the distinction between "active" and "passive." I believe that everyone will grant that Nielsen gets the amount of set usage correct---assuming that all sets are monitored and the sample is representative of the nation as a whole---which seems about right. So this means that the meter part, which is correct, isn't "active" but rather "passive"---requiring no participation by a panel member. The only "active" action takes place when a Nielsen panel member indicates that he or she is "watching" when a channel is selected---or changed.

    And here's another rub. If they are contending that the "active" part of the Nielsen people meter system is at fault---and seriously understates the number of "viewers" per set during an average correctly measured set usage minute---then a typical Nielsen finding that there are only 1.2 viewers-per-set in use is what's wrong---there really should be an average of roughly three viewers per set usage occasion. Which is impossible, as an average TV home has only 2.5 residents over the age of two. Is every resident at home and watching every time any set is on? Are there huge numbers of "visitors" also "watching" to bring the figure up to three VPS? Obviously neither of these possibilities is likely---or even close to likely.
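[Editor's note: the viewers-per-set implication Ed describes can be put in the same back-of-the-envelope form; the 1.2 VPS and 2.5-residents figures are the ones he cites, and the 3x factor is the ratio implied by the study's 22%-vs-78% finding:]

```python
# Sanity check of the viewers-per-set (VPS) implication described above.
# All inputs are figures cited in the comment, used here for illustration only.
nielsen_vps = 1.2          # typical Nielsen viewers per set in use
detection_ratio = 3.0      # implied passive:active detection ratio
residents_per_home = 2.5   # average residents age 2+ per TV home

implied_vps = round(nielsen_vps * detection_ratio, 2)
print(implied_vps)                       # 3.6
print(implied_vps > residents_per_home)  # True: more "viewers" per set than residents per home
```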

  8. Tony Jarvis from Olympic Media Consultancy, June 27, 2023 at 9:55 a.m.

    Perhaps the real question and real story for Joe Mandese is: how could this apparently flawed research be funded and endorsed by CIMM, which is a division of the ARF---or is that Beet.TV? I can never tell the difference any more. (FYI: Jon Watts is MD of CIMM and Director, Editorial Events at Beet.TV!)
    Of note, a "persons based presence" is not a measure of Eyes/Ears-On or contact, albeit certainly an OTS (versus the specious claim that "viewable impressions" reflect an OTS), and therefore not a measure of "viewing" per se. Infrared signals? Who remembers the Percy meter?

  9. Joe Mandese from MediaPost Inc., June 27, 2023 at 10:02 a.m.

    @Tony Jarvis: Fact check -- The study was commissioned by former CIMM chief Jane Clarke, not Watts:

    New York, NY (June 8, 2021) – The Coalition for Innovative Media Measurement (CIMM), today announced the launch of an initiative to better understand how time is spent across every platform available on TVs today, including Linear TV, OTT devices, Smart TV apps and video game consoles.

    The ‘Passive TV Measurement Study’ will be conducted in partnership with HyphaMetrics, a leading media metrics technology company that measures individual exposure to content, advertising, and brand integration across every viewing screen and smart home device as a single independent data source.

    CIMM will utilize  HyphaMetrics’ multi-layered methodology, which includes both active and passive methods in combination with  machine learning, to better understand  the unique behaviors of households for more accurate ‘persons in room’ measurement.

    The methodology can be used to assign person-level demographics to machine-level TV exposure datasets, such as those from Smart TV and Set-Top-Box (STB) Data.  While many in the TV industry are moving to the use of scaled granular TV datasets, calibration panels such as those utilized by HyphaMetrics can enable adjustment for data missing in these datasets.

    Using HyphaMetrics’ in-home panel, which captures individualized viewing behavior for an entire household across all media viewing environments, the study will assess the incremental value of accurate persons-level viewing detected using a multi-layer approach that includes Wifi, Bluetooth, Infrared, and machine learning.

    HyphaMetrics will compare its approach to persons-viewing data gathered in a simultaneous phone coincidental survey by a third-party call center. The pilot test will run over the next few months in 100 homes.

    “The industry has long sought to know, in real-time, who is watching what and in what format are they watching,” said Jane Clarke, CEO and Managing Director at CIMM. “With HyphaMetrics, we hope to establish the validity between the meter-detected person presence and the ‘in-the-moment’ source of truth from the phone survey to identify what TVs were on, what was being watched and which household members were in the room with TVs on.”

  10. Tony Jarvis from Olympic Media Consultancy, June 27, 2023 at 10:12 a.m.

    Very helpful Joe, thanks.  However, to the various excellent diverse points raised, including mine, many questions remain that need to be resolved. 

  11. Ed Papazian from Media Dynamics Inc, June 27, 2023 at 11:14 a.m.

    I'm having trouble ascertaining what the methodology being used here is. The HyphaMetrics website isn't very clear on this important point. Also, Joe's response indicates that the plan is to compare the HyphaMetrics  findings with the results of a 100 home telephone coincidental study that, as yet, hasn't taken place.

     Now that will be interesting. Assuming that the telephone portion of the study is conducted with a tiny but reasonably representative sample---not just "couch potatoes"---does anyone think that the telephone phase will find that the average respondent "watched" 12-15 hours of "TV" per day? If that is the finding, will anyone believe it?

  12. Joe Mandese from MediaPost Inc., June 27, 2023 at 11:29 a.m.

    @Ed Papazian: Here's how HyphaMetrics describes its methodology:

    HyphaMetrics provides Centric Origin Data, the industry’s most granular understanding of what the world is watching.

    Our proprietary hardware and software powers a panel that provides the only definitive measurement of an individual’s exposure to any content, advertising, branding, or products viewed on any device in the home.

    This data sample is an exclusive preview, intended to help the market understand the breadth and depth of our data, and the unique insights unavailable via other data collection methods.

    The data was collected from Oct 1–Dec 2021.

  13. Ed Papazian from Media Dynamics Inc, June 27, 2023 at 12:08 p.m.

    Joe, with all due respect, that's not an explanation of the methodology, it's just fluff. What I'm asking is, first, what kind of sample is utilized---ACR-only homes, homes with ACRs and other kinds of TV sets, etc.? How was the sample---or panel---assembled? A probability sampling? Something else? But far more important, how is the "viewing"---as well as the other media activities of the respondents---panel members?---determined? By webcams, by some other method of observation? Or is it merely assumed based on device usage and statistical machinations?

  14. Joe Mandese from MediaPost Inc., June 27, 2023 at 12:14 p.m.

    @Ed Papazian: HyphaMetrics never explicitly references ACR households, just households. I think they're trying to capture average household viewing via their passive measurement methods.

    Here's more description of it:

    HyphaMetrics employs a flexible consumer-centric approach that allows individual media behaviors to be captured in their most natural state without having to compel actions. The company uses three methods of person detection technology (Wi-Fi, Bluetooth, and Infrared) to definitively capture exactly who is watching. The system passively detects individuals as they enter, are present, and leave the room by detecting each panelist's Unique Identifier (a Bluetooth or Wi-Fi chip). An active component leverages the TV's remote for a validated backstop.
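[Editor's note: the detection logic described in that passage can be sketched roughly as follows; the function and field names are hypothetical illustrations, not HyphaMetrics' actual API:]

```python
# Hedged sketch of multi-layer presence detection: a panelist counts as present
# if any passive signal layer (Wi-Fi, Bluetooth, infrared) sees their unique
# identifier, with the TV remote's "active" log as a validated backstop.
def is_present(signals: dict) -> bool:
    passive_hit = any(signals.get(layer, False) for layer in ("wifi", "bluetooth", "infrared"))
    return passive_hit or signals.get("remote_backstop", False)

print(is_present({"bluetooth": True}))        # True  (passive detection)
print(is_present({"remote_backstop": True}))  # True  (active backstop)
print(is_present({}))                         # False (not detected at all)
```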

    And here's more description from the original 2021 CIMM announcement about the study:
    The methodology can be used to assign person-level demographics to machine-level TV exposure datasets, such as those from Smart TV and Set-Top-Box (STB) Data.  While many in the TV industry are moving to the use of scaled granular TV datasets, calibration panels such as those utilized by HyphaMetrics can enable adjustment for data missing in these datasets.

    Using HyphaMetrics’ in-home panel, which captures individualized viewing behavior for an entire household across all media viewing environments, the study will assess the incremental value of accurate persons-level viewing detected using a multi-layer approach that includes Wifi, Bluetooth, Infrared, and machine learning.

    HyphaMetrics will compare its approach to persons-viewing data gathered in a simultaneous phone coincidental survey by a third-party call center. The pilot test will run over the next few months in 100 homes.

  15. Ed Papazian from Media Dynamics Inc, June 27, 2023 at 1:06 p.m.

    Thanks, Joe. That helps. 

    So I gather that they define "viewing" rather generously as being present when a device is on---presumably this means any device: a smartphone, tablet, laptop, desktop PC, video game console, etc. I also suspect that their press release made a mistake when it referred to "active" methodologies "like Nielsen"---if that's what it said---as it defies credibility to think that Nielsen's people meter understates "viewing" by anything resembling the amount indicated in their chart.

    So what they may be saying is that it's misleading to rely on non-meter surveys which ask people if they watched TV yesterday at a certain time---or how much time they devoted to "TV" per day---as people often forget what they watched---especially when the program wasn't all that interesting---or discount times when they only saw portions of shows. There is ample evidence supporting this. For example, the Bureau of Labor Statistics obtains respondents' estimates of their daily time spent with "TV" on an annual basis and usually comes up with a normative figure of 2.5-3 hours per day---which is way below what Nielsen reports. Other "active" surveys of this nature also seem to produce "understated" levels of viewing.

    However, as I pointed out, Nielsen's national people meter is a mostly "passive" system using meters to derive a base sets-in-use level. Then it simply factors in the viewers-per-set information supplied by its panel members when a channel is selected. With the meters providing an "accurate" measure of TV set activity, it is most unlikely that a telephone coincidental---even with 100% cooperation of a small panel---is going to find that Nielsen's numbers are way too low---aka "wrong."

  16. Tony Jarvis from Olympic Media Consultancy, June 27, 2023 at 2:32 p.m.

    Bottom line? The statement on this study from Jon Watts, MD of CIMM, referenced "watching" and "viewing," although my understanding is that HyphaMetrics only measures "presence"---plus, telephone coincidentals are seriously problematic. So, per Ed Papazian et al, are these "watching/viewing" references and conclusions valid? And, if not, can we continue to have confidence in CIMM?

  17. Ed Papazian from Media Dynamics Inc, June 27, 2023 at 3:24 p.m.

    Tony, I have a high regard for the work CIMM has done in the past, so I think that this is merely a case of someone getting a tad over-promotional in writing their release. And it was a mistake to describe Nielsen's people meter as an "active" methodology and to suggest that it vastly understates the extent of TV "viewing"---assuming that's what was written, as I haven't seen the release. Like other CIMM work in the past, this is another small-scale experimental project which will probably confirm the fact that more people are "present" when a TV set is on than we might deduce from poll-like "active" questioning methods that rely on human cooperation---and memories. Fortunately, nobody has relied on such "active" methodologies in buying or selling national TV time to advertisers since TV's early days.

  18. Chuck Shuttles from HyphaMetrics, June 27, 2023 at 5:46 p.m.

    It’s clear that Margaret Heffernan was correct in stating “for good ideas and true innovation, you need human interaction, conflict, argument, debate.” This article and topic certainly are generating that…and we/Hypha are grateful to CIMM for supporting this study and its mission “...to promote improvements, best practices and innovations measurement…”.

    I’m happy to clarify the intent of this study, which should help clear up some of this back and forth. We had a very specific research question: is passive persons-detection methodology as good as active methods (button-pushing) for persons detection in our metered homes? Further, this study was designed to validate the accuracy of Hypha’s methodology, not to compare it to the methodologies of other companies. To that end, Coincidental Calls (asking what is on and who is present in the room of a metered TV at the moment the phone was answered, and validating that against what the meter detected in that same timeframe) turn out to be a good fit-for-purpose methodology for this specific objective, and the study found that passive methods were preferred over active at a 3:1 rate (comparing the passive methods of our own methodology to the active methods of our own methodology).

    As always, CIMM exists to push the media measurement space into the future. And Hypha exists to provide innovative and consumer-centric measurement solutions. We would be happy to set up a meeting and tell you more about our unique, patented, and proprietary approach to measurement. 

    We are grateful that this study highlights the effectiveness of our methods and would love to share them with you in greater detail so that you can apply that understanding to your historical knowledge. 

  19. Ed Papazian from Media Dynamics Inc, June 27, 2023 at 6:54 p.m.

    Chuck, thanks for your clarification. One question if I may. When you compared button pushing---which I take to be like Nielsen's people meter method---at least at the start of each channel selection---with what you found using your people sensing method, did you find that the button pressing produced only a third of the persons you considered to be "viewing" TV across all time periods or dayparts? In other words, if only one person pressed the button indicating that he/she was "watching" did you find three people to be actually in the room at that time?

  20. Chuck Shuttles from HyphaMetrics replied, June 28, 2023 at 3:33 p.m.

    I'm always open to questions and have read your comments on methodology for years, so I am happy to answer. The Coincidental Call established "the source of truth" of who is "actually in the room at that time" with the metered TV. We then validated who the meter had detected as being in the room against the Coincidental Call data. By having a multilayered approach of passive persons detection with an active (button-pushing) method, this analysis found that passive methods were preferred over active at a 3:1 rate. In other words, most panelists prefer the passive methods (low burden), but there are some who don't carry detectable mobile devices or provided beacons and prefer to button-push (flexible to personal preference, to maximize study compliance).

    I believe we're attempting to set up a briefing with you and Tony together to walk you through this further.

  21. Ed Papazian from Media Dynamics Inc, June 29, 2023 at 9:05 a.m.

    Thanks, again, Chuck. I have no problem at all in accepting the finding that respondents prefer the "passive" approach as opposed to having to do anything---such as button pressing. In fact, I'm surprised that the finding wasn't ten-to-one in favor of "passive" over "active."

    I think that we have been laboring under the impression that what you found was that the "passive" method produces three times the viewing levels of the "active" and that this is a more "accurate" determination. That's what the MP article appeared to be saying---I assume based on what was in the press release. I think that Tony will agree that we have been questioning a finding that you didn't make regarding Nielsen's people meters. Anyway, it's good to clear up any confusion this may have caused.

  22. Joshua Chasin from VideoAmp replied, June 29, 2023 at 11:55 a.m.

    The way I understand the use of the coincidental here, the phone calls were made to the actual household under measurement. In such a case, response rate isn't an issue because you aren't generalizing from the sample; you are simply validating the behavior the technology in the HH is capturing.

  23. Joshua Chasin from VideoAmp, June 29, 2023 at 12:09 p.m.

    My understanding-- having worked with Hypha and being at the CIMM meetings where this work was proposed-- was that Hypha's pilot panel provided an opportunity to do some learning as regards meter/persons measurement. Work done by BBM in Canada by Pat Pellegrini, comparing Nielsen (active) to Arbitron PPM (passive collection), demonstrated the impact of button-pushing fatigue, so these findings ought not surprise the media researcher.


    I don't think this work is especially controversial. The fundamental benefit of a telephone coincidental, and the reason it was long held to be a gold standard, was that it enables you to ask, "what are you doing RIGHT NOW?" No recall effects. It is "coincidental," because it coincides with the data collection methods you are evaluating. We assume that if the respondent says "X is on the TV and Y people are in front of it," then we may take that as true. Now we can compare the readings of persons presence reported back AT THE SAME TIME from active versus passive collection, against the coincidental (same HHs, same time) as truth set. If I tell you that right now my wife, daughter and I are watching Ted Lasso, and passive collection puts 3 of us in the room, and the people meter puts 2 of us there, that tells you something. 
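[Editor's note: Joshua's Ted Lasso example amounts to a simple truth-set comparison; a minimal sketch with made-up numbers:]

```python
# Compare active vs. passive persons readings against the coincidental "truth set"
# for the same household at the same moment. All values here are hypothetical.
coincidental_persons = 3   # what the respondent reports RIGHT NOW on the phone
passive_persons = 3        # what passive detection recorded at that moment
active_persons = 2         # what the button-push (people meter) log shows

print(passive_persons == coincidental_persons)  # True: passive matches the truth set
print(active_persons == coincidental_persons)   # False: active undercounts by one
```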

  24. Tony Jarvis from Olympic Media Consultancy, June 29, 2023 at 12:11 p.m.

    If the technology measures "presence" in proximity to a device/screen, and the coincidental asks about and measures "actual viewing," as implied in the article, I don't believe that is full validation. As I noted in the dialogue on LinkedIn:
    From a media research perspective, there are significant differences in value to advertised brands and to programmers between the measurement (aside from the technology, whether passive or active) of: content-rendered screen counts or set tuning; presence of persons 'near' the tuned device surface; presence of persons within the viewability/audible zone; persons with Eyes/Ears-On, e.g., TVision or OOH Audience Measurement of "contacts" via various JICs; persons with attention, e.g., Lumen Research Ltd or Adelaide Metrics. I trust I have that right?

  25. Ed Papazian from Media Dynamics Inc, June 29, 2023 at 1:17 p.m.

    Josh, we need to remember that the telephone coincidental method---aside from its current very low cooperation levels---is not really a gold standard and never was. In olden times, all that was asked was: was the set just on, and were you---or others---"watching"? There was no attempt to distinguish between program content and commercials---it was all about the program content. However, these days we have become much more specific. Now we are trying to determine who "watched" the commercials. So if you were to call a person at, say, 8:45PM tonight, and somehow got through, and the respondent knew which TV set in his/her residence might have been on and what show was being "viewed" at exactly 8:45PM, and a commercial happened to be playing on the TV screen, chances are that the respondent would think that you meant the program---not the commercials---and tell you who was "viewing." Needless to say, that would produce a huge overstatement of "audience."

    Suppose that you tried to deal with this possibility by reminding the respondent about what was on the TV screen exactly at 8:45PM, and it was a commercial for a fast food chain. If you "helped" the respondent by telling him/her that, then asked if the ad message was "watched," it's highly likely that many respondents would say that this was not the case when, in fact, perhaps 35% of them did look at the commercial---after all, how many people will admit that they watched a commercial? It's not "cool" to do so. So right away, you have introduced a subtle form of potential response bias that can only be dealt with by asking the respondent who claimed to be "watching" at exactly 8:45PM to describe the commercial and its basic sales message. At which point you have entered the Twilight Zone as far as the practicability of this kind of research is concerned.
