
Many listening platforms tout automated sentiment analysis and natural language processing as premium features, but on their own these technologies are insufficient to produce meaningful results. Skilled
analysts must weed out spam and irrelevant posts, score the conversation for sentiment, categorize conversation topics and translate key findings into actionable insights.
A recent example of a company leaning on automated sentiment analysis is
paidContent. Yesterday, it promoted its new index, "Social Standing," which leverages Trendrr's social-measurement technology to create a scoreboard that tracks social sentiment for top entertainment and media companies and brands. Visitors can view
changes in sentiment by day, week and month.
PaidContent is a valuable news source that I respect and read daily, so I was happy to
see it embrace social media analysis and offer this great new feature. But once I started to dig a little deeper and look through the conversations behind the analysis, I was surprised by the
data. Even the most novice marketer can see that most of the tweets are spam or irrelevant. For example, here is a sample of tweets (see the Fox News imagery above) that went into the sentiment-score calculation
for FOX (as in the Fox television network).
It's clear that the technology is pulling in every tweet mentioning "fox" in any language (e.g., Megan Fox appears a couple of times). If the
listening service can't even collect relevant posts, how can we trust that its language processing will understand sarcasm, cultural references, slang, abbreviations or any other linguistic
nuance? We can't. Technology alone is not sufficient to truly understand social media conversations.
Of course, Trendrr is just one product among many listening technologies available, and these issues aren't exclusive to Trendrr; they are industry-wide. While every service
claims to have the most accurate sentiment gauge, the fact is that nothing beats human analysts.
Here are 10 reasons why automated social media listening tools
fail:
1. Spam posts are counted
The tools have difficulty weeding out all posts created by
bots. The spam is then included in the analysis, thus skewing the results.
2. Keywords have multiple meanings
In the example from paidContent and Trendrr, terms like Megan Fox, the foxtrot, the animal fox, the slang "fox," etc., are all blindly counted as relevant mentions of the
"Fox" network.
3. You need the human touch to refine your data collection
You may need to redefine your keyword queries to target the right conversations for your analysis. Filtering posts and readjusting your search is a critical part of the listening process, as the sketch below illustrates.
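To make this concrete, here is a minimal sketch of that refinement loop in Python. The sample tweets and exclusion terms are hypothetical; in practice, an analyst builds the exclusion list by reviewing what the first pass actually returns, then reruns the search.

    # A minimal sketch of human-driven query refinement.
    # The tweets and exclusion terms below are hypothetical.
    SAMPLE_TWEETS = [
        "FOX just renewed my favorite show!",
        "Megan Fox was on the red carpet last night",
        "Saw a fox in my backyard this morning",
        "Fox News coverage of the debate was intense",
    ]

    def naive_match(tweets, keyword="fox"):
        # Pass 1: what an automated tool does; match the keyword anywhere.
        return [t for t in tweets if keyword in t.lower()]

    # Pass 2: an analyst reviews pass 1 and adds exclusion terms.
    EXCLUDE = ["megan fox", "a fox"]

    def refined_match(tweets, keyword="fox"):
        hits = []
        for t in tweets:
            lower = t.lower()
            if keyword in lower and not any(x in lower for x in EXCLUDE):
                hits.append(t)
        return hits

    print(len(naive_match(SAMPLE_TWEETS)))    # 4: every tweet, spam included
    print(len(refined_match(SAMPLE_TWEETS)))  # 2: only the network mentions

Even the refined pass is brittle (a tweet about "a fox costume on FOX" would be dropped), which is why filtering and readjusting is an ongoing human process rather than a one-time setting.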
4. Context is everything
It's not enough to read just the excerpt or post mentioning a brand. You need to take a step back and see if you can gain any additional insight about the conversation based on the
person who mentioned it, where the post appeared (i.e., what site or type of blog), whether the comment was a response to an earlier thought, etc.
5. ZOMG! Msgs r always EZ 2 read
#JK
Current technology is not sophisticated enough to understand sarcasm, slang, cultural references, abbreviations, phonetics, idioms
and other linguistic nuances.
6. A picture is worth a thousand words
Keyword-based tools cannot analyze images, videos
or rich media next to a post, which may completely change the context or tone.
7. Sentiment scoring is :(
How do you quantify terms like "like," "hate," "love," "LOVE!!!!," "meh," "ugggggggggggggh," "blech," or ":("? In different contexts, these words can
have completely different meanings.
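To show why, here is a minimal sketch of naive lexicon-based scoring in Python. The lexicon and weights are made up for illustration; real tools use far larger lists, but the failure mode is the same.

    # A minimal sketch of lexicon-based sentiment scoring.
    # The lexicon below is hypothetical and deliberately tiny.
    LEXICON = {"love": 2, "like": 1, "hate": -2, "meh": -1}

    def score(post):
        # Strip basic punctuation, then sum the per-word weights.
        words = post.lower().replace("!", "").replace(",", "").split()
        return sum(LEXICON.get(w, 0) for w in words)

    print(score("I love this show"))           # 2: correct
    print(score("Oh, I just LOVE reruns..."))  # 2: sarcasm read as praise
    print(score("ugggggggggggggh"))            # 0: the frustration is invisible

The scorer has no notion of intensity, tone or context; a human reading the same three posts gets all of them right instantly.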
8. Key consumer information is lost
Psychographic, technographic and ethnographic
insights about your consumer can be gained by visiting their blog, Twitter page, community, etc. Technology does not take this extra step.
9. True influencers and brand advocates fall through the cracks
At their most basic, automated tools give every mention equal weight, whether it comes from a bot, a person or a company. More advanced
services might assign different weights based on readership or Twitter followers, but they aren't taking into account true influence or the pertinent qualifications of a brand advocate. By reviewing
a person's other conversations, engagements and digital activities, you can evaluate their potential level of influence for your brand.
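As a rough illustration, here is a hedged sketch of that follower-count proxy in Python; the log scaling and the example numbers are my own assumptions, not any vendor's actual formula.

    import math

    # A hypothetical follower-based weighting, the kind of proxy some
    # services use. It knows nothing about topical authority, audience
    # relevance, or whether the account is even human.
    def mention_weight(followers):
        return math.log10(max(followers, 0) + 1)

    print(round(mention_weight(50), 1))         # 1.7: a devoted fan with a small audience
    print(round(mention_weight(2_000_000), 1))  # 6.3: a bot with purchased followers

Under this proxy, the bot outranks the genuine advocate by a wide margin; it takes a human reviewing both accounts to tell the difference.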
10. Tools can't interpret results
Now let's assume the services you're using are somehow 100% accurate, and you use them to gather data and provide an initial analysis. Someone
still needs to interpret the results. Why did sentiment take a sharp drop on that day back in February? Who noticed it? What can we do differently to prevent a similar issue? Did it have a persistent
effect on your target demographic? It takes a smart strategist to understand the findings and translate them into actionable insights.
I'm a big fan of
services like Trendrr, rely on them daily to gather data, and certainly encourage others to use them the same way. But smart social media researchers should make sure they aren't relying solely on automated
data collection and sentiment scoring to understand what consumers are saying. Everyone enjoys a little Megan Fox here and there, but we have to accept that sometimes she's just not relevant.