Local knowledge and personal experience make online reviews helpful when consumers search for what to buy, where to stay or what to eat.
But the online industry has grappled for years with untrustworthy and fake reviews, prompting companies like Google and Yelp to share information about how they vet reviews and why businesses, advertisers and brands should feel confident in their trust and safety efforts.
Yelp users contributed more than 19.6 million reviews in 2021, but the company's software declined to recommend 4.3 million of them for meeting criteria such as conflict of interest, suspected fakery, low usefulness, solicitation or unreliability. Another 1.1 million reviews were removed, either for violating Yelp's policies or by the reviewers themselves.
About 1,800 Consumer Alerts were placed on Yelp business pages in 2021, warning consumers about egregious attempts to manipulate ratings and reviews for a specific business.
Yelp today released its annual Trust and Safety Report for 2021, which provides insight into how it mitigates misinformation and maintains quality and integrity with the help of automated recommendation software, content moderation and its Consumer Alerts program.
Yelp in 2021 saw users express views on everything from the violent insurrection at the U.S. Capitol to vaccine and mask mandates and incidents of racism and discrimination.
More than 8,900 reviews were removed in relation to vaccine and mask mandates, and 29,300 reviews were removed in relation to incidents of racism and discrimination.
Yelp removes reviews that are not based on a firsthand consumer experience but instead appear to be driven by news events or social media. Such review bombing incidents led to the removal of more than 70,200 reviews for violating Yelp policies.
Google also relies heavily on reviews. More than 1 billion people turn to Google Maps each month to navigate and explore, making local reviews one of the most helpful sources of information when determining where to visit.
On Wednesday, Google shared details on how it keeps local reviews on Google Maps reliable and keeps abusive and false reviews off the platform.
Machine-learning models, which can recognize patterns, scan each review before it is posted and block reviews that violate Google's policies.
The technology is trained to recognize suspicious accounts, such as when a cluster of Google accounts leave reviews on the same few businesses, or the same business receives several one- or five-star reviews in a short period of time.
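Google does not publish the details of its detection models, but one of the signals described above — the same business receiving several one- or five-star reviews in a short window — can be illustrated with a simple heuristic. The function, field names and thresholds below are hypothetical, not Google's actual system:

```python
from datetime import datetime, timedelta

# Illustrative sketch only: flag a business whose reviews show a burst of
# extreme (one- or five-star) ratings within a short time window.
# Thresholds and the review dict shape are assumptions for this example.
def flag_review_burst(reviews, window_hours=24, burst_threshold=5):
    """Return True if `burst_threshold` or more one- or five-star
    reviews land within any `window_hours`-long window."""
    extremes = sorted(
        r["timestamp"] for r in reviews if r["rating"] in (1, 5)
    )
    for i, start in enumerate(extremes):
        window_end = start + timedelta(hours=window_hours)
        # Count extreme ratings falling inside this window.
        count = sum(1 for t in extremes[i:] if t <= window_end)
        if count >= burst_threshold:
            return True
    return False

# Example: six five-star reviews arriving ten minutes apart trip the flag.
base = datetime(2022, 1, 1, 12, 0)
burst = [{"rating": 5, "timestamp": base + timedelta(minutes=10 * i)}
         for i in range(6)]
print(flag_review_burst(burst))  # → True
```

A production system would combine many such signals (account clustering, content analysis, account history) and weigh them with trained models rather than fixed thresholds, but the sliding-window check captures the basic idea of burst detection.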
The technology looks at the content to determine whether it is offensive or contains off-topic content, and determines whether the account has any history of suspicious behavior and whether it shows any uncharacteristic activity such as recently gaining attention in the news or on social media that would motivate people to leave fraudulent reviews.
Human operators also run quality tests and complete additional training to remove bias from the machine-learning models, which are trained on the many ways certain words and phrases are used.
Once a review posts, the system continues to analyze the contributed content and watches for questionable patterns.
The team also works to identify potential abuse risks, which reduces the likelihood of successful abuse attacks, wrote Ian Leader, product lead of user-generated content at Google. “For instance, when there’s an upcoming event with a significant following — such as an election — we implement elevated protections to the places associated with the event and other nearby businesses that people might look for on Maps,” he wrote. “We continue to monitor these places and businesses until the risk of abuse has subsided to support our mission of only publishing authentic and reliable reviews.”