Commentary

Rentrak's Rubik: Q&A With Caroline Horner

Caroline Horner was an early pioneer in set-top-box data through her digital experience and leadership position at Dish. Now, as senior vice president/product innovation at Rentrak, she is helping to create a more versatile data toolbox so the advertising and entertainment industries can discover new value.

Here’s an excerpt from my interview with her, available in full here.

Charlene Weisler: What are you currently working on at Rentrak?

Caroline Horner: I am building the data products I have always wanted as a client. When you are working in advertising, you are often working with very awkward data; everyone has a question from their clients that they need to answer.

I now have the opportunity to… build a system that reports all the data that we want to report. What is real reach? What is real frequency?  What behavior… is absolute evidence of [people’s] interaction with content?

CW: Rentrak announced a new data initiative called Rubik. Can you tell us about it?

CH: Sure. Rubik is a household-level data set that we pulled from our massive data set. It’s about a half million homes, and we are providing an environment where clients tunnel into our server and do analytics at the household level to see what people are doing….

We heard that our clients wanted more flexibility [and] to do custom analytics on the fly, so we made a dataset available. We have over 10 network clients using the product right now. We have attached 4,000 to 5,000 segmentations to it – Boolean segmentation, the ability to cross different segments together.

So you can target those consumers [whether they are] Jeep owners or cat owners. The other side of it is the ability to do dynamic targeting, creating a user group or a viewer group that perhaps didn’t get exposed to any Jeep ads — so how do you replan a campaign for them?
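Rentrak has not published Rubik’s actual query interface, so the sketch below is only a rough illustration of the workflow Horner describes: a household-level table with Boolean segments attached, crossed to find (say) Jeep-owning cat owners, then filtered down to target households that never saw the campaign. All of the field names are hypothetical.

```python
# Illustrative only: Rubik's real interface is not public, so the table
# layout and field names (hh_id, jeep_owner, cat_owner, saw_jeep_ad) are
# hypothetical stand-ins for a household-level dataset with segmentations
# attached.
import pandas as pd

households = pd.DataFrame({
    "hh_id":       [101, 102, 103, 104, 105],
    "jeep_owner":  [True, False, True, False, True],
    "cat_owner":   [False, True, True, True, False],
    "saw_jeep_ad": [True, False, False, True, False],
})

# Boolean segmentation: cross two segments with AND logic.
jeep_and_cat = households[households["jeep_owner"] & households["cat_owner"]]

# Dynamic targeting: build a viewer group of target households that were
# never exposed to the Jeep ads, so the next flight can be replanned for them.
unexposed_targets = households[households["jeep_owner"] & ~households["saw_jeep_ad"]]

print(jeep_and_cat["hh_id"].tolist())       # [103]
print(unexposed_targets["hh_id"].tolist())  # [103, 105]
```

In Rubik itself, per Horner’s description, this kind of analysis happens inside Rentrak’s environment – clients tunnel into the server rather than working from an exported file.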

CW: What trends are you seeing in how consumers are using the new technology?

CH: It is dramatic. When I started, DVR penetration was in the 20%-30% range. Now you see a lot of time-shifting, which helps consumers see a lot more content. Even more than that is on-demand, where you don’t have to remember to make a selection to record a show. At the same time, there is more DVR capacity, so people can store more of what they want to watch over time.

So we are seeing a lot of pull away from the live component – although that is still very important. Free on-demand has been tremendous. People have been sampling and, when they like what they see, placing it on their DVR for future viewing. More study will be done of DVRs – what people choose to record, how long they keep it and how long before they actually view it. Do they need to keep up or will they binge? You can do homemade bingeing.

Digital is also starting to have some impact on viewing on devices. What I see is that live streaming is very similar to live television. But on-demand is almost exactly the same on the set-top box as it is in the digital environment.

The future could bring curated channels where social meets television. We are not there yet, but the future could bring groups together who recommend content to each other.

CW: All TVs being sold today are connected TVs. How will that impact data, measurement and consumer usage?

CH: Data – it’s getting a little scary because there is such fragmentation, and there has been an opinion to hold back data…. I think it really hurts the industry for folks to hide what is going on. I know the instinct is to protect competitive information, but it is hurting the ability of people to understand the value of that inventory.

I think transparency is important. If you hold on to [data], it can’t be transparent and people don’t know how to judge comparative value. The word “fraud” is ugly, but it is hard to have a benchmark that is the same if folks are hiding data.

So I think it is important for measurement to be brought together – edited the same way, MRC-accredited. We all have to agree on what is currency.  There will continue to be experimentation with consumer usage, and I think it will amplify very quickly.

There have been some breaches in the programmer deals. They are beginning to sell content into these OTT systems and the permissions are there – in the Sling TVs and the Apple TVs. Right now there is not a lot of choice. There are only a handful of viewer carriers. Something like Apple TV can become a real threat to the environment.

8 comments about "Rentrak's Rubik: Q&A With Caroline Horner".
  1. Ed Papazian from Media Dynamics Inc, September 9, 2015 at 8:02 a.m.

    While there is much that is positive in the Rentrak service, one question I would like to have seen posed, Charlene, even if it is very difficult to answer, is: how do you know who in the household is doing what when all you have regarding TV audience behavior is set usage, not viewing data? This is the major obstacle regarding "big data" applications for TV audience targeting, and it is not a trifling matter. You get quite different results when you focus on people's TV viewing as opposed to overall household set usage.

  2. Charlene Weisler from Writer, Media Consultant: WeislerMedia.blogspot.com, September 9, 2015 at 9:12 a.m.

    Hi Ed, Good point. I don't want to speak on behalf of Rentrak, but in my experience even measuring viewing behavior has its challenges – from constant button pushing that at some point may need to be ascribed, to lapses of attention from multi-tasking or leaving the room while the set is still on. Nothing is perfect. Some algorithms consider a channel change as an indicator of viewing. But that might vary by data vendor.
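Weisler’s point about ascription heuristics can be made concrete with a toy example. The rule below – credit viewing up to the last button press plus a fixed inactivity allowance – is not any vendor’s actual algorithm, only one hedged illustration of how a channel change can be treated as evidence of active viewing; the 60-minute cutoff is invented.

```python
# A toy ascription rule, not any vendor's actual method: treat a button
# press (e.g. a channel change) as evidence someone is actively viewing,
# and stop crediting a tuning session once no interaction has been seen
# for a cutoff period. The 60-minute cutoff is an invented assumption.
INACTIVITY_CUTOFF_MIN = 60

def credited_viewing_minutes(tuned_minutes, interaction_minutes):
    """tuned_minutes: how long the set stayed tuned to the channel.
    interaction_minutes: times (minutes from session start) of button
    presses such as channel changes observed during the session."""
    if not interaction_minutes:
        # No interactions at all: credit only up to the cutoff.
        return min(tuned_minutes, INACTIVITY_CUTOFF_MIN)
    last_press = max(interaction_minutes)
    # Credit viewing up to the last press plus the inactivity allowance.
    return min(tuned_minutes, last_press + INACTIVITY_CUTOFF_MIN)

# A three-hour tuning session whose last channel change came at minute 20
# is credited as 80 minutes of viewing under this rule.
print(credited_viewing_minutes(180, [5, 20]))  # 80
```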

  3. Ed Papazian from Media Dynamics Inc, September 9, 2015 at 9:51 a.m.

    Granted, Charlene. As you probably know, I am a frequent critic of TV "viewing" surveys; however, the differences between set usage data and data based on the likelihood that a given person in the household is viewing when the set is on are very great, especially in younger homes with children and/or teens. One way or another, possibly via a deal with Nielsen, Rentrak will have to come up with people ratings if it is to gain real traction. Just my opinion, of course.

  4. Charlene Weisler from Writer, Media Consultant: WeislerMedia.blogspot.com, September 9, 2015 at 11:54 a.m.

    Yes. People ratings are always the preferred metric rather than household ascription, but there are now more and more agencies comfortable with household ascription, saying that the sample (or footprint) size of STB data makes it extremely valuable even if it is HH-ascribed. I suppose there is no perfect system currently available.

  5. Ed Papazian from Media Dynamics Inc, September 9, 2015 at 1:24 p.m.

    Charlene, the problem is how do you know whether the person who makes the buying decision, and whom the ads are directed at, is the one watching a given TV show. It's hard to imagine how any ad agency can be OK with household ascription methods when, at a later date, the resulting indices will be projected against Nielsen's people ratings to make a TV buy. Putting it another way, let's say that frequent paper towel buyers – hypothetically, women aged 25-55 – are the real target of an advertiser's campaign, and a "big data" set usage study indicates that one show indexes much higher than another in targeting households with such residents. Are we to assume that this is also true of the women living in such homes? That simply is not the case often enough, which calls the entire procedure into question. (A toy illustration of this gap follows the comment.)

    I'm not against ascription, per se. In fact, it is sometimes necessary, as you can't encumber a respondent or panel member with too much to do. What I am questioning is the idea that you can use basically incompatible sets of data interchangeably, as if any variation from what is real is going to be minor. Advertisers and agencies learned this many years ago when they abandoned household ratings in favor of viewer ratings. Has everyone forgotten this basic lesson?
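Ed’s paper-towel example can be put in numbers. The figures below are invented purely to illustrate the gap he describes between a household (set-usage) index and an index built on the people the campaign is actually aimed at.

```python
# Invented numbers illustrating the household-vs-person gap Ed describes.
# Household cut: share of households containing a woman 25-55 that are
# tuned to each show.
show_a_hh_rating, show_b_hh_rating = 0.20, 0.16
hh_index = 100 * show_a_hh_rating / show_b_hh_rating
print(round(hh_index))      # 125 -> Show A looks 25% "better" on set usage

# Person cut: share of the women 25-55 in those same households who are
# actually viewing. If it is often a teenager or spouse watching Show A,
# the women's ratings can reverse the ranking.
show_a_person_rating, show_b_person_rating = 0.08, 0.10
person_index = 100 * show_a_person_rating / show_b_person_rating
print(round(person_index))  # 80 -> on the audience being bought, Show A trails
```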

  6. Charlene Weisler from Writer, Media Consultant: WeislerMedia.blogspot.com, September 9, 2015 at 4:53 p.m.

    Hi Ed, Caroline responded to me. She said: The point of household segmentations is to align more closely to purchaser behavior, or predictors of purchase behavior. The data is not perfect, but when you show a lift (in controlled tests) of sales in the households that were targeted with household-targeted advertising, it is fairly persuasive. Innovative media researchers will define the usefulness of new data sources, alongside the old ones. Let the better results prevail.
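The lift read Horner refers to is an exposed-versus-control comparison; the sketch below uses invented figures just to show the arithmetic.

```python
# A minimal exposed-vs-control lift calculation of the kind Horner
# describes. All counts are invented for illustration.
exposed_hh, exposed_buyers = 50_000, 2_600   # households targeted with the ads
control_hh, control_buyers = 50_000, 2_000   # matched held-out households

exposed_rate = exposed_buyers / exposed_hh   # 5.2%
control_rate = control_buyers / control_hh   # 4.0%
lift = (exposed_rate - control_rate) / control_rate

print(f"exposed {exposed_rate:.1%}, control {control_rate:.1%}, lift {lift:.0%}")
# exposed 5.2%, control 4.0%, lift 30%
```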

  7. John Grono from GAP Research, September 9, 2015 at 11:10 p.m.

    This is something we have wrestled with in Australia and come up with various models and guidelines.

    We have Single Source services (Morgan) where the one person completes all sorts of surveys, so we get the cross-media duplication. Of course, not everyone answers all questions, so there is a varying reliance on ascription. Plus, the weighted results often don't resemble the accepted 'currency' for a medium – which is often conducted using different research methodologies (e.g. PeopleMeters vs. diaries) – so some form of correction factor is used.

    We also have data fusion services where respondents are asked about product usage, ownership, intention, awareness etc., along with a battery of media usage questions (but rarely at the programme, station or publication level). Fusion techniques are then used to 'donate' (say) the TV ratings onto the consumer data. The fusion process ensures that both data sets match at the age/gender/geography level. In essence, the usage or intention data for ratings is then a probabilistic measure – which is fine, people just need to know what it is. (The donation step is sketched in code after this comment.)

    The problem then becomes that the data is analysed at a low level – one could say it is tortured into submission! I've seen strategy documents for clients, and pitch documents from media companies, promising all sorts of financial savings based on relative indices. For example, it may be Ed's paper towel buyers. One media owner will say they index at 103 to the industry average, and therefore deliver 3% more value, which, over the client's spend, delivers $600,000 of savings. Cough, cough. This is (in all probability) 'noise' from ascription, fusion, weighting, sample design, and respondent error, along with natural variation ... oh ... and the fact that some products do have a differentiation.

    While there are no hard and fast rules, my personal (non-scientific) guide is that any index in the 90-110 range is most likely noise. Anything in the 75-90 or 110-125 range is showing that there is probably some degree of dispersion and variance among the target audience. Anything in the <75 or >125 range is in all probability a reflection of genuine differences in the target audience's media consumption and is well worth pursuing.
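Grono’s rule of thumb for reading indices, written out as a simple classifier; the thresholds are his stated personal, non-scientific guide, not an industry standard.

```python
# John's personal rule of thumb for reading an index against the average
# of 100, transcribed directly from the ranges he gives above.
def read_index(index: float) -> str:
    if 90 <= index <= 110:
        return "most likely noise"
    if 75 <= index < 90 or 110 < index <= 125:
        return "probably some real dispersion among the target audience"
    return "in all probability a genuine difference, worth pursuing"

for ix in (103, 118, 68):
    print(ix, "->", read_index(ix))
# 103 -> most likely noise (the 3% 'saving' in the pitch example above)
# 118 -> probably some real dispersion among the target audience
# 68  -> in all probability a genuine difference, worth pursuing
```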
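Grono’s description of “donating” ratings onto consumer data, referenced earlier in his comment, can be sketched roughly as below. Real fusion engines use far richer matching variables and constraints; the records, hook variables and donation rule here are invented purely to show why the resulting viewing numbers are probabilistic rather than observed.

```python
# A simplified sketch of the donation step in data fusion: viewing from a
# ratings panel is "donated" onto survey respondents matched on the common
# hooks (age band, gender, geography). Everything here is invented.
from collections import defaultdict

ratings_panel = [  # donors: carry programme-level viewing
    {"age": "25-34", "gender": "F", "geo": "Sydney", "minutes_show_x": 120},
    {"age": "25-34", "gender": "F", "geo": "Sydney", "minutes_show_x": 0},
    {"age": "55+",   "gender": "M", "geo": "Perth",  "minutes_show_x": 45},
]

consumer_survey = [  # recipients: carry product usage but no viewing
    {"age": "25-34", "gender": "F", "geo": "Sydney", "buys_paper_towels": True},
    {"age": "55+",   "gender": "M", "geo": "Perth",  "buys_paper_towels": False},
]

# Index donors by the matching hooks, then give each recipient the average
# viewing of the donors in the same cell; a probabilistic estimate, not an
# observation of that respondent's own viewing.
donors_by_cell = defaultdict(list)
for d in ratings_panel:
    donors_by_cell[(d["age"], d["gender"], d["geo"])].append(d["minutes_show_x"])

for r in consumer_survey:
    cell = (r["age"], r["gender"], r["geo"])
    matches = donors_by_cell.get(cell)
    r["minutes_show_x"] = sum(matches) / len(matches) if matches else None

print(consumer_survey)
# The first respondent is credited 60.0 donated minutes (the average of her
# two matching donors); the second gets the single Perth donor's 45 minutes.
```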

  8. Ed Papazian from Media Dynamics Inc, September 10, 2015 at 6:56 a.m.

    I agree with you, John, about how the indices should be interpreted. If one looks at the data shown in single source studies like MRI and Simmons, plus another similar study that I, myself, conducted, you will note, regarding various forms of TV and individual programs within genres, that most of the indices fall within the 10-15% range, plus or minus. What's more, if you check from one study to the next, you will see "ups" and "downs" that are very hard to explain – except by the various kinds of errors and biases inherent in such studies. Add ascription to the mix, melding the findings from different methodologies, and you get even less discrimination – or, if you do, and you are using set usage ratings rather than viewer ratings, then this, not the truth, is what is causing the disparity.

    In other words, while there is some value in utilizing household data to try to get a handle on what the residents are doing, this is a rather crude approach, fraught with possible errors – not so much in direction as in magnitude. In most cases the promised wonders of valid and "granular" insights just aren't there to the degree that is claimed.
