Data is the most powerful tool available to marketers today, and there’s never been more of it. But when it’s inaccurate, it can quickly become the biggest threat to success.
This has never been truer than with streaming and CTV advertising, stresses Matt Hultgren, vice president of analytics at performance TV agency Marketing Architects.
“Horror stories” about campaigns producing significantly different results than originally thought — leaving marketers months behind in reaching strategic goals, with depleted testing budgets — are all too common, he says.
Advanced TV Insider asked Hultgren to elaborate on the dangers and how to avoid them.
Why is streaming data simultaneously a boon to advertisers and a potential pitfall?
Hultgren: Streaming offers a new set of data capabilities to TV advertisers. While traditional broadcast television is notoriously difficult to measure, as you know, streaming ads are served on a one-to-one basis using IP addresses. Vendors know exactly who was served each ad and can attribute it like any digital channel.
Brands rightly seek to use this data to unlock key insights for success. But data’s value to advertisers depends on its quality.
While there have been many discussions about the need for higher-quality data for all business decisions, streaming advertisers should be especially vigilant about ensuring the validity of their results as the marketplace continues to mature.
What are the big challenges to streaming data accuracy?
Hultgren: When a consumer isn't on the same internet connection that the streaming ad is served on, attribution becomes difficult. For example, if consumers make purchases outside of their homes or have their WiFi turned off on their devices, the IP addresses won't match.
To tackle these gaps, data companies have developed device graphs that identify IP addresses associated with a household, workplace and friends’ homes. So before you know it, one consumer can be associated with 10, 15 or even 20 IP addresses.
The problem is that different data companies take varying levels of liberty with their device graphs. Depending on the company, results change anywhere from 2x all the way up to 10x. That’s easily the difference between a successful campaign and a struggling one.
In other words, although device graphs address the original challenge of matching IP addresses, they often create an imprecise window of results that deceive brands about their campaign’s true impact.
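The swing Hultgren describes can be made concrete with a small sketch. Everything below is hypothetical: the IP labels, conversion counts and the two graphs are invented purely to show how the breadth of a device graph, on its own, changes the attributed total.

```python
# Hypothetical data: conversions observed, keyed by the IP address they came from.
conversions_by_ip = {
    "ip_home": 40,     # same connection the ad was served on
    "ip_work": 25,     # workplace connection
    "ip_mobile": 30,   # phone off home WiFi
    "ip_friend": 15,   # a friend's household
}

# Two device graphs for the same exposed household: one conservative,
# one that takes more "liberty" and associates many more IP addresses.
graphs = {
    "conservative": {"ip_home"},
    "aggressive": {"ip_home", "ip_work", "ip_mobile", "ip_friend"},
}

def attributed(graph_ips, conversions):
    """Credit every conversion whose IP appears in the device graph."""
    return sum(n for ip, n in conversions.items() if ip in graph_ips)

for name, ips in graphs.items():
    print(name, attributed(ips, conversions_by_ip))
# conservative -> 40, aggressive -> 110: nearly a 3x difference
# from the graph choice alone, with identical underlying behavior.
```

The same campaign, measured through two graphs, reports wildly different results, which is exactly why the choice of data partner matters.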
This is where data can become a trap, instead of an asset.
So how can marketers use streaming data to full advantage without being misled?
Hultgren: After being involved in and watching the outcomes of many streaming campaigns, I’ve boiled it down to four steps.
First, they need to take the time necessary to set their CTV tests up for success. While this may seem simple, not doing this is the number one mistake. Far too often, brands rush tests for the sake of learning. Test, learn, scale. In theory, this is great. But if you don't do the necessary set-up work on the front end, you risk losing all the learnings from the test — and wasting the budget you used to acquire those learnings.
Specifically, before launching a test, they should make sure they’re able to answer these questions: What is the goal of the campaign? What metrics will determine success? What is my measurement plan? Do I have the necessary budget to measure the desired outcome? How long does the test need to be to drive results?
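One way to make that checklist operational is to encode it so a test cannot launch with unanswered questions. The field names below are hypothetical, a minimal sketch rather than any standard schema.

```python
# Hypothetical test-plan record mirroring the pre-launch questions above.
test_plan = {
    "goal": "Drive incremental site visits from new households",
    "success_metrics": ["incremental visit rate", "cost per incremental visit"],
    "measurement_plan": "IP-matched attribution plus a holdout control group",
    "budget_sufficient_for_measurement": True,
    "test_duration_weeks": 8,
}

# Flag any question left unanswered before the campaign goes live.
missing = [k for k, v in test_plan.items() if v in (None, "", [])]
assert not missing, f"Answer these before launch: {missing}"
print("Test plan complete; ready to launch.")
```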
Step two is finding a data-transparent partner.
While there's no shortage of data when it comes to streaming, and having access to your campaign data is critical, getting that access is increasingly challenging thanks to the red tape created by walled gardens, black boxes and vendors that don't share IP address information. Such "partners" force you either to use the streaming provider's attribution solution or to pay for a third party with access to the streaming provider's data.
It's far better to find a streaming partner willing to share the data needed to feel confident in your campaign results.
Step three is using multiple models.
When resources are short, it can be a saving grace to find a media vendor that doubles as an attribution partner. But be careful. A vendor's modeling is often more aggressive in attributing credit than third-party measurement companies.
It can be tempting to lean into the overly optimistic self-reported measurement from vendors. But if the results aren't real, the lack of actual business results will eventually show itself.
No model is perfect, but third-party measurement is more objective and believable. And if you have the resources, I recommend setting up your own models for additional insight and greater data confidence.
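Triangulating across models can be as simple as lining the estimates up and treating a far-out vendor number with skepticism. The figures below are invented for illustration.

```python
from statistics import median

# Hypothetical attributed-conversion estimates for one campaign.
estimates = {
    "vendor_self_reported": 1_800,  # typically the most aggressive
    "third_party": 900,
    "in_house_model": 1_050,
}

low, high = min(estimates.values()), max(estimates.values())
mid = median(estimates.values())
print(f"Range: {low}-{high}, median: {mid:.0f}")
# When the vendor figure sits far above the median of the other
# models, weight the more conservative, independent estimates.
```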
Step four is quantifying incrementality.
One of the biggest "gotchas" in the industry today is measurement platforms taking credit for every single IP match between an ad and a corresponding web or app visit within the home. This fails to account for traffic that would have come from those homes regardless of whether they were exposed to an ad.
To quantify the true incrementality of tests, brands should run test and control groups. By measuring the visit and conversion rates of homes not exposed to an ad, you quantify the baseline level of conversions you would expect to see without the ad, and can subtract that from the actual number of conversions in the homes exposed to the ad. By not accounting for this baseline, you risk severe over-attribution of your campaign, which will lead to misleading insights and unprofitable scaling decisions.
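The test-and-control arithmetic described above can be sketched in a few lines. All counts here are hypothetical; the point is the baseline subtraction, not the specific numbers.

```python
# Hypothetical campaign counts for a test-vs-control readout.
exposed_homes = 100_000      # homes served the ad
exposed_conversions = 1_300  # conversions observed in those homes

control_homes = 100_000      # comparable homes never served the ad
control_conversions = 1_000  # their conversions over the same window

# Baseline rate: what exposed homes would likely have done without the ad.
baseline_rate = control_conversions / control_homes

# Expected conversions in the exposed group absent any ad exposure.
expected_baseline = baseline_rate * exposed_homes

# Incremental conversions attributable to the campaign.
incremental = exposed_conversions - expected_baseline
lift = incremental / expected_baseline

print(f"Incremental conversions: {incremental:.0f}")
print(f"Lift over baseline: {lift:.1%}")
```

In this example, an IP-match-everything platform would claim all 1,300 conversions, while the control group shows only 300 of them are incremental, which is the difference between a campaign that looks profitable and one that actually is.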