GumGum And IRIS.TV Introduce Video Image Recognition In Contextual Targeting To Protect Brands

GumGum plans to announce a partnership this week that brings its Verity contextual analysis engine to the IRIS.TV video marketplace.

Advertisers will begin building personalized messages from contextual data signals, with the ability to understand how and when campaigns perform. The messaging will be personalized based on the content, not the audience.

While IRIS.TV gets contextual segments from several data providers, including Comscore, Verity analyzes the visuals in the video with computer-vision image-recognition technology, rather than just scanning the available metadata or gleaning the subject matter from audio transcriptions.

Without that analysis, it is difficult to determine whether a video is brand-safe. “Not everyone realized they would face this headwind of global policies and scrutiny on how they would use audience data,” said GumGum CEO Phil Schraeder.

IRIS.TV helps GumGum “bottle” its contextual technology and put it on the “shelf” for use across a variety of brand-safe video content, down to individual frames within a video. Schraeder said advertisers also can expect an announcement with JW Player’s video platform as the companies look to move further into this space.

For the past decade, GumGum has been leveraging content-level computer vision (CV) and natural-language processing (NLP) to protect advertisers and their brands, as well as consumers, from video imagery such as footage of a white supremacist rally with Bob Marley’s “Everything’s Gonna Be Alright” playing as an audio backdrop.

"What can advertisers do when they can't pick up the symbols and all they have is the metadata?" he said.

IRIS.TV launched its Contextual Video Marketplace in 2018 to simplify the video ecosystem, combining publishers, contextual data partners, ad servers and SSPs to package video content into brand-safe segments for advertisers.

Advertisers will select the categories or content they want to identify in the video. GumGum will work with IRIS.TV to process the content, curate it and create a video segment aligned with the advertisers’ request. Benchmarks will provide guidance.

“We’ve been training our neural networks to act like humans and double-checked by a human, who then defines whether or not the training has been accurate,” he said.  

Schraeder expects audience segments will become a tool of the past, as more brands look to define what it means to remain ethically responsible.

When asked about performance, Schraeder said GumGum is working to produce industry benchmarks. It’s about training neural networks, he said. To prove the concept, GumGum will publish a white paper in the next few weeks, produced with a major holding company, that outlines what he believes are the cost efficiencies and benefits.

“We will produce the third-party validated research and benchmarks to prove it,” he said.
