Audioburst's $6.7 million Series A funding and the launch of a new application programming interface (API) give developers access to the world's largest (and growing) library of audio content, news and information, to pull into their apps in real time. The company is building an audio search engine, scheduled to launch in August, that will work like google.com does for text.
This API, which pulls content from across the Web into apps ranging from mobile to voice-activated and entertainment systems, builds on what Audioburst calls a "listening identity" for each user: a profile based on the person's search queries, interests, reactions, and listening habits.
Advanced Media, a speech recognition technology company in Japan, led the financing round, with participation from Flint Capital, 2B-Angels, and a consortium of Mobileye investors.
Dan Sacher, former SVP at Viacom, has joined Audioburst as its head of content partnerships. Osnat Fainaru Benari, head of AOL's innovation arm Area 51, has also joined Audioburst as an advisor.
Here's how it works: Audioburst's engine listens to, records, and transcribes audio content from radio, podcasts, and online videos. It then automatically divides the audio into one- to three-minute segments, each tagged with the keywords and phrases someone might use to search for that content.
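To make the pipeline concrete, here is a minimal sketch of the segment-and-tag step. This is not Audioburst's actual code; the function names (`segment_transcript`, `tag_burst`), the 180-second cutoff, and the simple word-frequency tagging are all illustrative assumptions about how a transcribed stream could be split into searchable bursts.

```python
from collections import Counter

# Illustrative stopword list; a production system would use a far larger one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "is", "for"}

def segment_transcript(lines, max_seconds=180):
    """Group (timestamp_seconds, text) transcript lines into bursts
    no longer than max_seconds (three minutes by default)."""
    bursts, current, start = [], [], None
    for ts, text in lines:
        if start is None:
            start = ts
        if ts - start >= max_seconds and current:
            bursts.append((start, current))  # close the current burst
            current, start = [], ts
        current.append(text)
    if current:
        bursts.append((start, current))
    return bursts

def tag_burst(texts, top_n=3):
    """Tag a burst with its most frequent non-stopword terms,
    standing in for the keyword/phrase tagging described above."""
    words = [w for t in texts for w in t.lower().split() if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(top_n)]
```

A transcript of timestamped lines would come out as a list of bursts, each with a start time and a handful of search tags.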
By building out this library of audio content, Audioburst can then serve it up in apps as needed. For example, with Audioburst's NewsFeed skill enabled on Amazon Echo, Alexa can serve up the latest news on Donald Trump: the skill searches the Audioburst library and plays the latest bursts from a variety of sources, such as WLTW-FM.
Audioburst powers a real-time news feed on Google Home and Amazon Echo that lets users retrieve the latest information on any topic and get live updates from news outlets, radio stations, and podcasts. In the app, it tells the user where the information originated, such as a radio or television broadcast or an online outlet like The Wall Street Journal.
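The topic lookup described above, newest bursts first, each attributed to its source, can be sketched as a simple in-memory query. The `Burst` record and `latest_bursts` function below are hypothetical stand-ins, not Audioburst's published API.

```python
from dataclasses import dataclass, field

@dataclass
class Burst:
    topic_tags: list          # lowercase tags attached at segmentation time
    source: str               # e.g. a station call sign or outlet name
    published_at: int         # unix timestamp of the broadcast

def latest_bursts(library, query, limit=5):
    """Return the newest bursts tagged with the query term,
    so a voice assistant can play them in order with attribution."""
    matches = [b for b in library if query.lower() in b.topic_tags]
    return sorted(matches, key=lambda b: b.published_at, reverse=True)[:limit]
```

A skill handling "latest news on X" would call something like `latest_bursts(library, "X")` and read each result's `source` back to the listener.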