Sharethrough Ups Engagement With Dynamic Video Captions

Sharethrough on Tuesday introduced dynamic captions for video ads to improve ad recall and increase engagement.

More people are using their mute button when they view videos, reading the captions instead, according to Dan Greenberg, president at Sharethrough. “We see two things driving this behavior,” he said. “People keep their devices on mute all the time and videos default to play on mute, especially on our phones. We turn off the ringer and quiet all sounds that come out of it.”

Even when phones aren’t on mute, videos that play in feeds on Instagram, Facebook, Twitter or any publisher site are designed to start on mute so they don’t interrupt the user experience.

In fact, 75% of mobile users keep their phones muted while a video plays, but standard video lacks the context required to drive return on ad spend when users watch on mute, according to Sharethrough research.

Nearly half of people watch TV with captions on, especially adults under 44. “Our hypothesis is that people are so accustomed to videos playing on mute, especially with the growth of video watching on social sites like Instagram, that they now prefer reading videos while they watch them,” he said. “Even on TV.”

Sharethrough, an independent ad exchange, built the technology to automatically render the ad by matching it to the surrounding content. It generates video captions that can drive as much as a 56% increase in message comprehension versus ads without captions, according to the company’s data. The feature also improves accessibility for deaf and hearing-impaired viewers, while providing an overall better experience that does not interrupt the user.

The dynamic captions use Sharethrough’s TrueTemplate technology, which reads the fonts, colors, sizes and other design elements of every site and matches the caption to fit each site in real time.
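Sharethrough hasn’t published the mechanics of TrueTemplate, but the general idea of matching an overlay to a page’s typography can be sketched in a few lines of browser code. The function and selector names below are illustrative assumptions, not Sharethrough’s actual implementation:

```typescript
// Minimal sketch: style a caption element to look native to the publisher page.
// All names here are hypothetical, not Sharethrough's API.
function styleCaptionToPage(captionEl: HTMLElement, sampleSelector = "article p"): void {
  const sample = document.querySelector<HTMLElement>(sampleSelector);
  if (!sample) return; // no sample text found; keep the default caption styling

  const pageStyle = window.getComputedStyle(sample);

  // Mirror the publisher's typography so the caption blends with the site.
  captionEl.style.fontFamily = pageStyle.fontFamily;
  captionEl.style.fontSize = pageStyle.fontSize;
  captionEl.style.color = pageStyle.color;
  captionEl.style.lineHeight = pageStyle.lineHeight;
}
```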

Captions are created using a mix of automation and human review. The automation transcribes the video’s audio and creates an SRT (SubRip Subtitle) file that maps each line of text to the moment it appears in the video. A human then verifies the words. Sharethrough clients have the option to trigger the captions to load for all of their videos or for a single video.
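For context, an SRT file is a plain-text list of numbered cues, each with a start and end timestamp and a line of text. A minimal sketch of producing that format from timed transcript lines, using made-up sample data, might look like this:

```typescript
// Minimal sketch of writing SRT cues from timed transcript lines.
// The sample data is illustrative; in practice the cues come from
// speech-to-text output that a human then verifies.
interface Cue {
  start: number; // seconds into the video
  end: number;
  text: string;
}

// Format seconds as an SRT timestamp: HH:MM:SS,mmm
function toTimestamp(seconds: number): string {
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = Math.floor(seconds % 60);
  const ms = Math.round((seconds % 1) * 1000);
  const pad = (n: number, w = 2) => String(n).padStart(w, "0");
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms, 3)}`;
}

// Join cues into SRT text: index, time range, text, blank line between cues.
function toSrt(cues: Cue[]): string {
  return cues
    .map((cue, i) => `${i + 1}\n${toTimestamp(cue.start)} --> ${toTimestamp(cue.end)}\n${cue.text}\n`)
    .join("\n");
}

// Example: two cues covering the opening seconds of a video.
console.log(toSrt([
  { start: 0.0, end: 2.5, text: "Welcome to the demo." },
  { start: 2.5, end: 5.0, text: "Captions map each line to its moment in the video." },
]));
```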

As more people mute their videos, other senses compensate for the missing sound.

How effective is it? Greenberg said: “Since we are so accustomed to videos playing on mute, our sight sense is heightened, which is why videos with captions, or other text, are more likely to catch our attention and, since we’re then reading the video, we’re more likely to comprehend the message of the video.”

Certain captioned or written words have higher recall when sound is muted. Sharethrough conducted a neuroscience study a few years ago that concluded four groups of words deliver the highest recall: insight, time, space and motion. The company labeled them “context words” because they prompt people to think about the context of the word, which increases brain function.
