Interactive design for voice search requires marketers to model conversations and categorize topics, much as they traditionally categorize the keywords typed into a search box on Google or Bing. For voice search, those categories might include commands such as turning on the lights, requesting delivery times for a package, or adjusting the temperature in a room.
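One way to picture this kind of categorization is a simple keyword-to-intent mapping. This is an illustrative sketch only; the intent names, phrases, and matching logic are hypothetical and not drawn from any particular voice platform.

```python
# Hypothetical sketch: voice-search "categories" modeled as intents,
# each keyed to example phrases a user might say.
VOICE_INTENTS = {
    "lights_on": ["turn on the lights", "lights on"],
    "package_delivery": ["where is my package", "delivery time"],
    "set_temperature": ["set the temperature", "make it warmer"],
}

def categorize(utterance: str) -> str:
    """Match a spoken query to an intent category, or 'unknown'."""
    text = utterance.lower()
    for intent, phrases in VOICE_INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

print(categorize("Alexa, turn on the lights"))  # lights_on
```

A production system would use a trained natural-language model rather than substring matching, but the design question is the same: which categories of request should the brand recognize?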
Voice will move design toward the way something works rather than the way it looks, says John Jones, senior vice president of design strategy at Fjord, a consultancy at Accenture Interactive.
The first step requires marketers to understand how the brand wants a question to be understood and answered, and what form the resulting action should take. Then they must understand how consumers want to hear or visualize the results.
"Many times that means using [human] senses other than voice, such as a visual indicator on the webpage," he said.
Fjord's internal studies found that people speak to voice assistants like Alexa very differently than they speak to other humans. Optimizing responses will need to account for that until people become more familiar with talking to machines.
The second step requires marketers to understand that not all virtual assistants respond in the same way. Optimization is becoming increasingly complicated, per Jones, because tone of voice matters and each platform handles responses differently. Microsoft Cortana, for example, lets the developer select the voice and the responses, but the rules are more rigid with Amazon Alexa and Apple Siri.
It's also important for marketers to know whether their potential and existing customers prefer to receive information in small bites rather than long-winded responses.
Adding chatbots to the mix changes the optimization task: the marketer can work with both voice and visuals, which is at once an advantage and a disadvantage.
The third step requires marketers to think about designing for what Jones calls the "human handoff": how a chatbot passes a query to a human when it cannot solve the problem. Sometimes the handoff isn't important, because the technology doesn't need to solve a problem.
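The handoff pattern above can be sketched in code: the bot answers when it is confident, and routes to a live agent otherwise. This is a minimal, hypothetical sketch; the threshold, function names, and canned answers are invented for illustration and do not represent any real chatbot framework.

```python
# Hypothetical sketch of a "human handoff": escalate to a live agent
# whenever the bot's confidence in its answer is too low.
HANDOFF_THRESHOLD = 0.6  # assumed cutoff, tuned per deployment

def answer_with_confidence(query: str) -> tuple[str, float]:
    """Stand-in for the bot's answer engine; returns (answer, confidence)."""
    known = {"store hours": ("We are open 9 to 5.", 0.95)}
    return known.get(query, ("", 0.0))

def respond(query: str) -> str:
    answer, confidence = answer_with_confidence(query)
    if confidence < HANDOFF_THRESHOLD:
        # Human handoff: route the conversation to a live agent.
        return "Let me connect you with a team member who can help."
    return answer

print(respond("store hours"))
print(respond("why was my order delayed?"))
```

The design decision is where to set the threshold: too low and customers get wrong answers, too high and the bot escalates queries it could have handled.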
He also suggests testing the voice technology with the sales group first, so the virtual assistant can learn how to answer questions and the sales team can learn how to train the technology. The voice search will become smarter and more connected to the brand than it would if it launched directly to customers.