Voice may not be the end-game when designing experiences for Amazon’s Alexa and other platforms.
That was one of the takeaways from a panel discussion at the Boston Innovators Group's Voice Computing event this week.
The event included exhibits and presentations from voice-focused startups and a panel discussion among executives from InterContinental Hotels Group, Amazon and Mobiquity about using voice for brand experiences.
Most of the conversation centered on Amazon's Alexa platform as accessed through Echo devices and the just-launched Echo Show, which the panel described as essentially an Echo with a screen.
One topic discussed was the limitations of voice as an interface. For example, when presenting a hotel guest with a room service menu, reading that content aloud line by line through a digital voice assistant would take far too long, according to Chris Lamb, manager of mobile products at InterContinental Hotels Group.
As a result, Lamb said the solution would be to have an accompanying display to show the menu and then use voice for the actual ordering process.
This sentiment appears to be shared by at least one agency executive as well.
For example, a group of agency executives recently discussed using voice for brand experiences at the annual MediaPost IoT Marketing Forum, as the AI & IoT Daily reported at the time (Agency Execs Focus On Voice For Brand Engagement).
While most of that discussion centered on voice's ability to communicate more information more efficiently than touch, the executives also identified use cases that do not lend themselves well to voice.
The item-by-item nature of voice communication can make any process involving multiple options drag on, according to Michael Miraflor, senior vice president and global head of Futures and Innovation at Blue 449.
"If I want to buy a plane ticket, I wouldn't want to buy it with voice, that would take 30 minutes," Miraflor said at the time.
Furthermore, Miraflor said that voice search shifts more of the burden to the artificial intelligence and machine learning side, which must deliver the right single result rather than multiple options.
“It’s not the same as a search result where you’re going to get five or 10 organic results and choose from that,” Miraflor said. “The system will have to know exactly what you want because you’re not going to want to have a conversation to pick from 10 different products.”
At the Boston event, Joel Evans, co-founder of Mobiquity, demoed an experience driven by voice interaction but supplemented by a connected visual experience on a desktop.
In the experience, which is being rolled out for Nestle, a consumer asks an Alexa-enabled device at home about meal recipes while viewing the accompanying website on a desktop. As the conversation progresses, Alexa presents recipe options in pairs and the website shows what each looks like. Alexa then asks whether the user wants to make option 1 or option 2 (as labeled on-screen), and the experience continues in that fashion.
The idea behind the experience is that voice is not necessarily the complete end-game when creating for Alexa, according to Evans. "It's voice first, but it's not voice only," he said.
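As a rough illustration of that voice-first, screen-supplemented pattern, here is a minimal sketch of an Alexa skill handler in Python using the ask-sdk library. The intent name RecipeOptionsIntent, the sample recipe pair, and the sync_display() helper that would push state to a companion website are all hypothetical stand-ins for illustration; this is not Mobiquity's or Nestle's actual implementation.

```python
# Minimal sketch of a voice-first skill that keeps a companion screen in
# sync. Intent names, recipes, and sync_display() are hypothetical.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

RECIPES = ["lemon chicken", "mushroom risotto"]  # placeholder option pair


def sync_display(options):
    """Hypothetical helper: push the current option pair to the companion
    website (e.g., via a web socket or shared session store) so the screen
    can render images labeled Option 1 and Option 2."""
    pass


class RecipeOptionsHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("RecipeOptionsIntent")(handler_input)

    def handle(self, handler_input):
        # Keep the visual channel in step with what the voice channel says.
        sync_display(RECIPES)
        handler_input.attributes_manager.session_attributes["options"] = RECIPES
        speech = (
            "I found two recipes. Option one is {}. Option two is {}. "
            "Which would you like to make?"
        ).format(*RECIPES)
        return (
            handler_input.response_builder
            .speak(speech)
            .ask("Say option one or option two.")
            .response
        )


sb = SkillBuilder()
sb.add_request_handler(RecipeOptionsHandler())
handler = sb.lambda_handler()  # AWS Lambda entry point for the skill
```

Under this sketch, the companion web page would subscribe to the same state that sync_display() publishes, rendering the two options on screen while voice handles the actual selection, mirroring the "voice first, but not voice only" split Evans described.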
At the opposite end of that spectrum was a presentation from a company called Mylestone, which combines artificial intelligence that analyzes groups of photos with human copywriters who craft stories about them; the stories are then read back through an Alexa-enabled device.
The idea is to analyze photos from a user’s life and create documented stories that serve as digitally preserved memories, according to Dave Balter, CEO of Mylestone. "Eventually, every one of us is going to be preserved as some form of artificial intelligence," Balter said.
Although the system currently uses humans to write the stories, Balter said that part of the process can ultimately shift to artificial intelligence.