Adobe on Monday unveiled technology in Adobe Analytics that unearths insights without the user asking. John Bates, director of product management at Adobe Analytics, says the product focuses on uncovering “unknown unknowns” -- the ability to collect, analyze and understand the data based on triggers users may not ask for.
Three years in the making, the virtual analyst -- a machine-learning tool built on a variety of Adobe products like Anomaly Detection and Contribution Analysis -- integrates into Adobe Sensei, the company’s AI and machine-learning framework. It surfaces signals that would have otherwise gone unnoticed.
In a first for Adobe, the platform collects the data on behalf of brands and applies machine learning to better understand the preferences and tastes of individuals who log into Adobe Analytics, before combining it with insights that the system recommends. It continually analyzes the trending data based on similarities, identifies meta or macro events, and adjusts over time based on machine learning.
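Adobe has not published the internals of this pipeline, but the kind of anomaly detection described above is often approximated with a simple z-score test against a metric's recent history. The sketch below is an illustrative baseline, not Adobe's algorithm; the page-view numbers and the threshold are invented for the example.

```python
import statistics

def detect_anomalies(series, threshold=2.0):
    """Flag indexes whose z-score against the whole series exceeds
    the threshold -- a minimal stand-in for anomaly detection."""
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# Hypothetical daily page-view counts with one obvious spike on day 5.
pageviews = [1020, 980, 1005, 995, 1010, 4800, 990, 1000]
print(detect_anomalies(pageviews))  # [5]
```

A production system would use a seasonal or trend-aware model rather than a global mean, but the idea is the same: surface the points a human analyst would not have thought to query for.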
Some of the features users can expect in the future will focus on the platform suggesting next actions to take as a result of the insights, such as email remarketing or targeting campaigns. “This is a stepping stone for a much broader vision,” Bates says.
The data Adobe collects on behalf of its customers increases at a double-digit rate year over year, he says, yet customers use only a sliver of it. “We collect hundreds of billions of data points, but the portion being accessed by marketers and analysts across the enterprise is between 1% and 3%,” he says.
In addition to identifying unknown unknowns, the technology prioritizes data based on business and user context without the user having to prompt the system, and it identifies nuances in historical data for analysts. The platform also uses AI to identify and understand that context.
Adaptive learning helps the platform understand the data important to the user and provide alerts. It provides a means to “like” or “not like” recommendations, which reinforces the machine-learning model and makes the virtual analyst more intelligent.
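The article does not describe how that feedback is applied internally; one common pattern it resembles is feedback-weighted ranking, where each “like” raises the weight of an insight category and each “not like” lowers it, so preferred categories surface first. The sketch below is a toy illustration under that assumption; the category names and learning rate are invented, not Adobe's API.

```python
class InsightRanker:
    """Toy feedback-weighted ranker: like/dislike signals nudge
    per-category weights, and insights are sorted by learned weight."""

    def __init__(self, categories, lr=0.2):
        self.weights = {c: 1.0 for c in categories}  # neutral start
        self.lr = lr

    def feedback(self, category, liked):
        # A like adds lr to the category's weight; a dislike subtracts it.
        delta = self.lr if liked else -self.lr
        self.weights[category] = max(0.0, self.weights[category] + delta)

    def rank(self, insights):
        # insights: list of (title, category) pairs, best-weighted first.
        return sorted(insights, key=lambda i: self.weights[i[1]], reverse=True)

ranker = InsightRanker(["anomaly", "contribution", "trend"])
ranker.feedback("anomaly", liked=True)
ranker.feedback("trend", liked=False)
print(ranker.rank([("Traffic spike", "anomaly"),
                   ("Seasonal dip", "trend"),
                   ("Channel mix shift", "contribution")]))
# [('Traffic spike', 'anomaly'), ('Channel mix shift', 'contribution'),
#  ('Seasonal dip', 'trend')]
```

Real systems would update a learned model rather than scalar weights, but the loop is the same: explicit user feedback changes what the virtual analyst surfaces next.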
“We collect hundreds of billions of data points, but the portion being accessed by marketers and analysts across the enterprise is between 1% and 3%,” Could a reason for this be that the mounds of data consist mostly of relatively insignificant information at any given time?
Henry, I asked John the same question about the significance of the data. No data is insignificant. It's really how you use it and the ability of your technology to process it. Do you remember when CPG companies began using RFID chips in their physical inventory supply chain to track the delivery of goods from manufacturing to warehouses? Walmart and Target started the trend. Those chips collected millions of data points, but the companies only needed specific ones at any given time. They might have used 200 data points for some deliveries and 100 for another. It depended on location, type of product and other factors. I believe this is what Adobe is trying to do: collect all the data and only use the data points relevant to that specific campaign. Another campaign might require other data points.
Laurie, thank you for noticing my comment, and the quick reply. It would seem we are in agreement here to some degree - what's pertinent is the need of the user at the specific time.
Two words. Occam's Razor.