Commentary

Time To Revisit Time-Based Metadata And Its Role In Monetization

I'm glad to see that my article last week on the value of time-based metadata (Dive Deep With Video Metadata For Major Monetization Opportunities) has spurred a spirited debate around one of my favorite subjects. I would like to address some of the questions that came up about time-based metadata, but first I wanted to take a step back and provide the lay of the land.

As you know, delivering on the promise of multiplatform video (TV Everywhere being one example) presents many opportunities and challenges. Time-based metadata is one of several foundations that can help capitalize on this opportunity, in addition to things like authentication, content identification and adaptive bit-rate streaming.

Among the indexing technologies currently being used by the television and film studios are facial recognition, scene classification, and ad-break detection. These technologies are still at an early stage in the metadata creation ecosystem. They are primarily used today to help automate the digital workflow as content moves out of the film vaults and into our living rooms. A good example is the ability to automatically strip the bars and tone from a digital file, which ultimately reduces content owners' operational expenses.

Readers last week asked for more specifics on how time-based metadata could help in certain environments. Examples include but are not limited to search and discovery, analytics, rights and restrictions, and content recommendations. I'll address a couple of examples around ad targeting.

Today, the physical ad slots of a video are detected during the encoding and indexing process. The time codes are marked in our database as existing physical ad slots created during post-production. These slots are not always in ideal places when you are attempting to maximize ad revenues.

One area of research right now focuses on a system that could dynamically look at previous video viewing behaviors and calculate an optimal placement based on viewership drop-off or audience engagement. These placements would come from a list of virtual (not to be confused with physical) ad slots determined during the encoding and indexing process. The number and spacing of ad slots would remain consistent throughout the video. In other words, you would not be hit with any more ads than before; the ads would just be more relevant.
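To make the idea concrete, here is a minimal sketch of that selection logic in Python. Everything in it -- the function name, the engagement scores, the spacing rule -- is hypothetical, since no actual implementation is being described here:

```python
# Hypothetical sketch: choose virtual ad slots where audience engagement
# is highest, while keeping the original slot count and a minimum
# spacing between breaks. All names and data are illustrative.

def pick_virtual_slots(candidates, engagement, slot_count, min_gap):
    """candidates: time codes (in seconds) marked at indexing time.
    engagement: dict mapping time code -> engagement score (0..1).
    Returns up to slot_count time codes in playback order, preferring
    high-engagement slots and skipping any slot closer than min_gap
    seconds to one already chosen."""
    chosen = []
    # Consider candidates from highest to lowest engagement.
    for tc in sorted(candidates, key=lambda t: engagement.get(t, 0.0),
                     reverse=True):
        if all(abs(tc - c) >= min_gap for c in chosen):
            chosen.append(tc)
        if len(chosen) == slot_count:
            break
    return sorted(chosen)
```

For example, given candidate slots at 120, 300, 330 and 600 seconds, two breaks allowed, and a 60-second minimum gap, the slot at 330 would be skipped because it sits too close to the higher-engagement slot at 300.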

The big eureka moment here: if you have the ability to select an ad slot, doesn't it make sense to find the one with the best association with the ad itself? Enter ad targeting. When the ad slot is called, the metadata around that time code is sent to the ad management platform. The ad platform then attempts to find the most relevant ad that you, as the user, would engage with.
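A hedged sketch of that lookup: gather the descriptive metadata near the requested time code, then let the ad platform pick the ad with the most tag overlap. All function names, tags and ads below are invented for illustration:

```python
# Hypothetical sketch: match an ad to the scene metadata surrounding
# an ad slot's time code. All names and data are illustrative.

def metadata_near(index, time_code, window=30):
    """index: dict mapping time code (seconds) -> set of descriptive tags.
    Returns the union of tags within `window` seconds of time_code."""
    tags = set()
    for tc, entry_tags in index.items():
        if abs(tc - time_code) <= window:
            tags |= entry_tags
    return tags

def select_ad(ads, context_tags):
    """ads: dict mapping ad name -> set of tags describing the ad.
    Returns the ad whose tags overlap the scene context most
    (ties broken alphabetically by ad name)."""
    return max(sorted(ads), key=lambda ad: len(ads[ad] & context_tags))
```

So an ad slot falling in a scene indexed with "camping" and "outdoors" tags would pull an outdoor-gear ad ahead of an unrelated one.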

Not that I'm suggesting we all engage with ads, but you can see that if I'm watching a video about camping that interests me, I might be more willing to sit through a North Face ad than yet another insurance ad.

Recently there's been much discussion about additional ways these tools and technologies could help with content monetization. I'm a big advocate of being able to view content at a reduced cost -- or, in some cases, for free! This has become especially important when you look at the amount of contraction that has occurred in the video publishing industry. In some cases that contraction has been healthy, pushing publishers and content owners to build better brand awareness, but I'll leave that discussion for another time.

I know I'm getting a little "deep" in my analysis -- but you can start to see potential applications for time-based metadata that can help content owners more effectively monetize their assets. In the meantime, keep the comments coming!

4 comments about "Time To Revisit Time-Based Metadata And Its Role In Monetization".
  1. Mike Einstein from the Brothers Einstein, March 23, 2010 at 12:16 p.m.

    Where were you when John Wanamaker needed you?

  2. Damien Somerset from Insolent Robot Corp., March 23, 2010 at 2:35 p.m.

    Time-based metadata is definitely the future. I'm pretty sure YouTube is currently running tests on voice recognition in order to add that information to its search results.

  3. David Burch from TubeMogul, March 23, 2010 at 3:26 p.m.

    One area of research right now focuses on a system that could dynamically look at previous video viewing behaviors and calculate an optimum placement based on viewership drop-off or audience engagement.

  4. Matthew Shaw from Flimp Media Inc., March 25, 2010 at 11:27 a.m.

    Matt,

    Again, you're talking about using a very powerful tool for something that is becoming increasingly ineffective. You still haven't addressed the primary issue here, which is that temporal metadata is better suited for selling content than it is for selling ad space within the content. Moreover, the selling-content model is one that is much more attractive to content producers, which means there is a tremendous opportunity here to change the game -- and you're not talking about it.
