Google Assistant Most Accurate, Cortana Closing Gap, Per Study

Voice search and personal assistants that can carry out tasks are an important part of a search marketer's campaign strategy, and accuracy plays a key role.

Which personal assistant is intelligent enough to carry out tasks and provide accurate information? Stone Temple Consulting's research "Rating the Smarts of the Personal Digital Assistants," released Thursday, analyzes all four — Google Assistant, Cortana, Siri, and Alexa — including their ability to take action on the user's behalf, such as ordering flowers or booking an airline flight reservation.

The study used Google search as a baseline: it answered 74.3% of the questions, with a complete-and-correct rate of 97.4%. The "100% Complete & Correct" rate requires that a question be answered fully and directly; notably, very few of the queries were answered by any of the personal assistants in an overtly incorrect way.

Google Assistant answered 68.1% of the questions with a 90.6% complete and correct rate. Cortana answered 56.5% of the questions with an 81.9% complete and correct rate. Siri answered 21.7% of the questions with a 62.2% complete and correct rate. Alexa answered 20.7% of the questions with an 87% complete and correct rate.

In finer detail, the study examines all four assistants and the percentage of questions each answered verbally. Google Assistant on Google Home provided verbal responses to the most questions, but that does not mean it provided the most answers overall, compared with the phone-based services Google search, Cortana, and Siri. All of these have the option of responding on screen only, and per the study, each of them did so several times.

The number of answers that were simply incorrect may be surprising. The error rate for all of the assistants, with the exception of Siri, fell below 1%. Cortana had the fewest answers rated as simply incorrect. In the example provided by the report, Stone Temple Consulting notes that Google search wanted to translate the word "cocona" into "coconut," so it did not provide a correct answer.

Since Siri relies on some answers from Wolfram Alpha, it sometimes digs too deeply into the meaning and delivers a "nonsense response," per the findings.

Overall, Google demonstrated that it is most accurate and intelligent, while Cortana works hard to close the gap. Alexa and Siri both face the limitation of not being able to leverage a full crawl of the Web to supplement their knowledge bases, per the Stone Temple Consulting research.

2 comments about "Google Assistant Most Accurate, Cortana Closing Gap, Per Study".
  1. Mark Traphagen from StoneTemple Consulting, April 27, 2017 at 8:45 a.m.

    Thanks for writing about our study. Your readers might appreciate a link to the study so they can review the data and methodology for themselves: https://www.stonetemple.com/digital-personal-assistants-test

  2. Mark Traphagen from StoneTemple Consulting replied, April 27, 2017 at 10:47 a.m.

    Thanks for adding the link!
