Former Women Googlers, Neuroscience Expert Build Prizma Video Engine

Attracting viewers to a site is one thing. Keeping them is another. A trio of women built an engagement platform, Prizma, at a company they call FEM. It serves videos from network partners, and signed deals with Discovery Digital, IGN, and Reuters while in beta. Deals are also in the works with Walt Disney Co.'s ABC and espnW.

FEM's founders -- Rachel Payne, CEO; Natasha Mohanty, CTO; and Meghana Bhatt -- are serial entrepreneurs. Payne and Mohanty's experience at Google taught them how to categorize and index content at scale. With Bhatt, the trio developed a distribution and analytics platform that combines content with context, analyzing the data to personalize the viewing experience. It also considers the source of the site traffic, such as Facebook, AOL, or Discovery.

"In the moment matters a lot, sometimes 50% or 60%, depending on the viewer," Payne says. "What they do in the moment has a big impact on the type of content they are most receptive."

FEM holds six patents, granted and pending, covering techniques from media recommendation to character-based media analytics. While FEM developed the technology for its own use, the company has licensed one patent to an unnamed technology company.

In fact, FEM's entire backend engineering team came from Google. The engine's architecture is based on search, per Payne. Since launching in beta this summer, the platform has processed more than 500,000 videos and made more than 1 billion recommendations for other content. The technology weighs how people respond to content, applies machine learning algorithms to generate recommendations, and runs a validation step to confirm what's served.
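The three-stage flow described above -- response signals feeding a scoring model, followed by a validation pass -- can be sketched roughly as follows. This is an illustrative outline only; the function names, signal shape, and scoring scheme are assumptions, not FEM's actual implementation.

```javascript
// Hypothetical sketch of a respond -> score -> validate recommendation flow.
// None of these names come from FEM's API; the "model" here is a simple
// stand-in (a sum of signal scores) for the machine learning step.
function recommend(candidates, responseSignals, validate) {
  // Score each candidate video using the viewer-response signals.
  const scored = candidates.map(video => ({
    video,
    score: responseSignals.reduce((sum, signal) => sum + signal(video), 0),
  }));

  // Rank by score, highest first.
  scored.sort((a, b) => b.score - a.score);

  // Validation pass: only serve candidates that pass the check.
  return scored.filter(item => validate(item.video)).map(item => item.video);
}
```

A caller would supply its own signal functions (each mapping a video to a score) and a validation predicate appropriate to the site.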

The technology can process about 100,000 videos in a matter of hours, Payne says. Any website or app can embed the player through an application programming interface (API) or a JavaScript widget.
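An embed of this kind typically amounts to pointing a player at a URL that carries the site's identity and the traffic source. The sketch below shows the general pattern; the endpoint path and parameter names are invented for illustration and are not FEM's actual API.

```javascript
// Hypothetical embed-URL builder for a video player widget.
// The "/player" path and the "site"/"ref" parameters are assumptions.
function buildEmbedUrl(baseUrl, { siteId, referrer }) {
  const params = new URLSearchParams({ site: siteId, ref: referrer });
  return `${baseUrl}/player?${params.toString()}`;
}

// A page would then point an <iframe> or a widget <script> tag at this URL,
// letting the engine see which source (Facebook, AOL, etc.) the viewer came from.
```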

Signals are linked together to determine recommendations. One signal analyzes the viewer's emotional state after watching a video. It connects to other signals, such as the motivation for watching and who or what appears in the video, such as a professional sports team or a location. Metadata is added to the video and matched to context tied to why the viewer might read the accompanying article or view the photograph.
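One way to picture linking these signals is as a weighted score per candidate video, with Payne's "50% or 60%" figure for in-the-moment context reflected in the largest weight. The signal names, weights, and scoring scheme below are assumptions for illustration, not FEM's method.

```javascript
// Illustrative weighted combination of the signals described above:
// in-the-moment context (referrer), emotional state, and who/what is
// in the video. All field names and weights are hypothetical.
function scoreVideo(video, context,
                    weights = { moment: 0.55, emotion: 0.25, entities: 0.20 }) {
  // In-the-moment signal: does the traffic source match where this video performs?
  const momentScore = context.referrer === video.bestReferrer ? 1 : 0;

  // Emotional-state signal: does the viewer's mood match the video's target mood?
  const emotionScore = context.mood === video.targetMood ? 1 : 0;

  // Entity signal: overlap between metadata entities (teams, places, people)
  // and the entities in the article or photo the viewer is looking at.
  const shared = video.entities.filter(e => context.entities.includes(e)).length;
  const entityScore = shared / Math.max(video.entities.length, 1);

  return weights.moment * momentScore +
         weights.emotion * emotionScore +
         weights.entities * entityScore;
}
```

Videos would then be ranked by this score, with the weights tuned per viewer, matching Payne's point that the in-the-moment share varies.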
