Although speech recognition technology has been around for decades, it’s finally coming into more mainstream usage.
Apple’s Siri helped popularize early mass-market speech recognition by getting millions of iPhone owners to ask it questions, even though the voice assistant didn’t always understand what was being asked or, in other cases, provide the correct answer.
However, improvements have been constant, with technology more fully understanding what is being said, as any user of Google Assistant or the mobile app Hound can attest.
This level of accuracy has been a long time coming, with the first interactive voice recognition systems launching in the mid-1990s.
A new forecast from Tractica projects that despite some barriers, the speech recognition market is on track for significant annual growth.
Voice and speech recognition software is projected to grow from $1.1 billion in 2017 to $6.9 billion in 2025, an annual growth rate of 30%, according to the forecast.
A key challenge remains: machines have a tough time understanding the context of human speech. It can be difficult to distinguish between what was said and what was meant, since computers tend to be more literal than people.
True comprehension requires grasping tone, implied references, location and history, among other things, the Tractica report notes.
Meanwhile, voice technology continues to improve as it learns from consumer interactions, which now number in the millions thanks to digital home assistants like Amazon Alexa and Google Home.