Commentary

10 Reasons Why Automated Social Media Listening Tools Fail


Many listening platforms tout automated sentiment analysis and natural language processing as premium features. But on their own, these technologies are insufficient to produce meaningful results. Skilled analysts must still weed out spam and irrelevant posts, score the conversation for sentiment, categorize the topics of conversation and translate key findings into actionable insights.

A recent example of a company trying to use automated sentiment is paidContent. Yesterday, it promoted its new "Social Standing" index, which leverages Trendrr's social-measurement technology to create a scoreboard tracking social sentiment for top entertainment and media companies and brands. Visitors can view changes in sentiment by day, week and month.

PaidContent is a valuable news source that I respect and read daily, so I was happy to see it embrace social media analysis and offer this new feature. But once I dug a little deeper and looked through the conversations behind the analysis, I was surprised by what I found. Even the most novice marketer can see that most of the tweets are spam or irrelevant. For example, here is a sample of the tweets (see the screenshot below) that went into the calculation of the sentiment score for FOX (as in "Fox" television).

[Screenshot: sample tweets included in the Fox sentiment calculation]

It's clear that the technology is pulling in every tweet mentioning "fox," in any language (Megan Fox, for example, appears a couple of times). If the listening service can't even collect relevant posts, how can we trust its language processing to understand sarcasm, cultural references, slang, abbreviations or any other linguistic nuance? We can't. Technology alone is not sufficient to truly understand social media conversations.
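
To make the relevance problem concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the tweets, the term lists, the rules), and real platforms are far more elaborate, but the failure mode is the same: a naive match on "fox" accepts every tweet, while even a crude, analyst-tuned context filter separates network mentions from the noise.

    tweets = [
        "Watching the new drama on FOX tonight, can't wait!",
        "megan fox looked amazing at the premiere",
        "Saw a fox in my backyard this morning",
        "FOX just renewed my favorite show for another season",
    ]

    # Naive approach: any tweet containing "fox" counts as a brand mention.
    naive_matches = [t for t in tweets if "fox" in t.lower()]

    # Crude context filter: require a TV-related term and exclude known noise.
    # Both term lists are hypothetical; an analyst would refine them over time.
    context_terms = {"show", "season", "tonight", "episode", "network", "drama"}
    noise_terms = {"megan fox", "foxtrot", "red fox"}

    def looks_relevant(tweet):
        lower = tweet.lower()
        if any(noise in lower for noise in noise_terms):
            return False
        return any(term in lower for term in context_terms)

    filtered = [t for t in tweets if looks_relevant(t)]

    print(len(naive_matches))  # 4: every tweet, relevant or not
    print(len(filtered))       # 2: only the plausible network mentions

Even this toy filter only works because a human chose the term lists and will keep correcting them; that curation is precisely the work the automated tools skip.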

Of course, Trendrr is just one product among many listening technologies, and these issues aren't exclusive to it; they are industry-wide. While every service claims to have the most accurate sentiment gauge, the fact is that nothing beats human analysts.

Here are 10 reasons why automated social media listening tools fail:

1.     Spam posts are counted

The tools have difficulty weeding out posts created by bots, so spam is included in the analysis and skews the results. (A minimal sketch of one such filter appears after this list.)

2.     Keywords have multiple meanings

In the example from paidContent and Trendrr, mentions of Megan Fox, the foxtrot, the animal, the slang term, etc. are all blindly counted as relevant mentions of the "Fox" network.

3.     You need the human touch to refine your data collection

You may need to redefine your keyword queries to target the right conversations for your analysis. Filtering posts and readjusting your search is a critical part of the listening process.

4.     Context is everything

It's not enough to read just the excerpt or post mentioning a brand. You need to take a step back and see whether you can gain additional insight from the conversation: who mentioned it, where the post appeared (e.g., what site or type of blog), whether the comment was a response to an earlier thought, and so on.

5.     ZOMG!  Msgs r always EZ 2 read #JK

Current technology is not sophisticated enough to understand sarcasm, slang, cultural references, abbreviations, phonetics, idioms and other linguistic nuances.

6.     A picture is worth a thousand words

Keyword-based tools cannot analyze the images, videos or rich media accompanying a post, which may completely change its context or tone.

7.     Sentiment scoring is :(

How do you quantify terms like "like," "hate," "love," "LOVE!!!!," "meh," "ugggggggggggggh," "blech" or ":("? In different contexts, these words can have completely different meanings. (A naive scorer sketched after this list stumbles on exactly these terms.)

8.     Key consumer information is lost

Psychographic, technographic and ethnographic insights about your consumers can be gained by visiting their blogs, Twitter pages, communities, etc. Technology does not take this extra step.

9.     True influencers and brand advocates fall through the cracks

At their most basic, automated tools give all mentions equal weight, whether they come from a bot, a person or a company. More advanced services might assign different weights based on readership or Twitter followers, but they aren't taking into account true influence or the pertinent qualifications of a brand advocate. By reviewing a person's other conversations, engagements and digital activities, you can evaluate their potential level of influence for your brand. (The last sketch after this list shows how follower-count weighting alone misleads.)

10.  Tools can't interpret results

Now let's assume that somehow the services you are using are 100% accurate, and you use social-listening services to gather data and provide initial analysis. Someone still needs to analyze the results. Why did sentiment take a sharp drop on that day back in February? Who noticed it? What can we do differently to prevent a similar issue? Did it have a persistent effect on your target demographic? It takes a smart strategist to understand the findings and translate them into actionable insights.
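
On reason 1, here is a minimal sketch of one common anti-spam heuristic: flagging identical text posted verbatim by many accounts. The posts and the cutoff are invented for illustration; real filters layer many such signals, and still miss plenty.

    from collections import Counter

    # Invented posts; three bot accounts push identical text.
    posts = [
        ("@bot_a", "WIN A FREE IPAD!! http://spam.example/x"),
        ("@bot_b", "WIN A FREE IPAD!! http://spam.example/x"),
        ("@bot_c", "WIN A FREE IPAD!! http://spam.example/x"),
        ("@alice", "Fox's new fall lineup actually looks promising."),
    ]

    DUPLICATE_CUTOFF = 3  # arbitrary threshold for this sketch
    body_counts = Counter(text for _, text in posts)

    def probably_spam(text):
        # Identical text across many accounts is a classic bot signature.
        return body_counts[text] >= DUPLICATE_CUTOFF

    clean = [(user, text) for user, text in posts if not probably_spam(text)]
    print(clean)  # only @alice's post survives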
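
On reason 7, here is a deliberately naive lexicon-based scorer of the sort many tools rely on (the mini-lexicon and example phrases are invented). Slang, elongated spellings and sarcasm all defeat it:

    import re

    # Invented mini-lexicon; commercial tools ship much larger ones.
    LEXICON = {"love": 2, "like": 1, "hate": -2, "blech": -1}

    def naive_score(text):
        # Tokenize crudely and sum per-word scores; unknown words count 0.
        words = re.findall(r"[a-z']+", text.lower())
        return sum(LEXICON.get(w, 0) for w in words)

    print(naive_score("I love this show"))             # 2: fine
    print(naive_score("meh"))                          # 0: slang not in lexicon
    print(naive_score("ugggggggggggggh, Fox again"))   # 0: elongation defeats lookup
    print(naive_score("oh sure, I just LOVE reruns"))  # 2: sarcasm scored positive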
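
And on reason 9, a toy example of why weighting mentions by follower count is a blunt instrument (the accounts and numbers are invented): one large, possibly automated account swamps the genuine voices.

    # Invented mentions; sentiment is -1/+1 from some upstream scorer.
    mentions = [
        {"user": "@news_bot",  "followers": 90000, "sentiment": -1},
        {"user": "@superfan",  "followers": 400,   "sentiment": 1},
        {"user": "@tv_critic", "followers": 12000, "sentiment": 1},
    ]

    # Follower-weighted average sentiment: the big account dominates,
    # whether or not it has any real influence on anyone.
    weighted = sum(m["followers"] * m["sentiment"] for m in mentions)
    reach = sum(m["followers"] for m in mentions)
    print(round(weighted / reach, 2))  # -0.76: one account outweighs two real fans

Deciding whether @news_bot is a wire-service bot or a genuinely influential critic still takes a person clicking through to the account.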

I'm a big fan of services like Trendrr and rely on them daily to gather data, and I certainly encourage others to use them the same way. But smart social media researchers should make sure they are not relying on automated data collection and sentiment scoring alone to understand what consumers are saying. Everyone enjoys a little Megan Fox here and there, but we have to accept that sometimes she's just not relevant.

5 comments about "10 Reasons Why Automated Social Media Listening Tools Fail".
  1. Shawn Rutledge, April 28, 2011 at 7:51 p.m.

    Hi John,

    You nailed the *key* problem for listening platforms. It’s not as sexy as sentiment or influence, but Relevance is the critical foundation that enables real insight from social media data. Despite massive advancements in search (information extraction, information retrieval, graph mining) in the past decade, very little has made it into most listening platforms. Simple Boolean queries and basic spam filters prevail. There are also quite a few NLP and text analytics vendors that are now selling into social media but have technology developed for traditional text sources. These advanced but brittle techniques also fail on informal and colloquial social media content.

    By marrying deep social media expertise with deep expertise in information retrieval, machine learning, data mining and text analytics, you can achieve very powerful tools that work well on the kinds of problems you mention (relevance, disambiguation, context, colloquial language, sentiment, psychographics, influence, etc.). However, they are still only tools; framing the problem, interpreting the results and leveraging the insights still takes smart people. Rigor in data quality and management of selection bias are critical to actionable social analytics.

    Advanced social media analytics platforms can succeed. I talk about a recent example here: http://www.visibletechnologies.com/commentary/humans-vs-ai-in-a-sentiment-bout-and-the-winner-is%E2%80%A6/

    Shawn

  2. Carolee Sherwood from Media Logic, April 29, 2011 at 4:35 p.m.

    I especially like the point about context.

    I had a conversation at home recently that's a good metaphor for what you are talking about. My son is fascinated with rare coins lately. It's all he talks about (when he's not talking about hockey). My husband told me he thought maybe our son would be interested in books about coin collecting. I said, "No. He's not interested in coins; he's interested in get-rich-quick schemes."

    I knew what was behind the words: he was crossing his fingers that he'd find a million-dollar prize in someone's piggy bank. He'd be just as interested in playing the lottery. It was the payoff he was after, not the coin.

    A listening tool would miss that, and it wouldn't know how to reach back out with something relevant. At Media Logic, we talk a lot about the limitations of listening; as a stand-alone strategy, it just doesn't work. Surprise (not really, of course): you need people in your social media plan!

    You've got it right in this piece: There's lots about listening you can't automate!

  3. Paul Wittenberg from PWSMC, May 3, 2011 at 5:25 p.m.

    First off, Shawn is a smart guy and knows what he is talking about.

    Regarding a tool failing, that is hard to imagine. Tools just do what they do. What really fails is the people who use them improperly. Much like a hammer: you can use it to hit a nail or to hit yourself on the head. In either case, you can't blame the hammer.

    For those who understand listening tools and their capabilities, these tools can provide significant value. For those expecting the tool to do the job all by itself, well, it is a bit like expecting the hammer to hit the nail with no human intervention.

    These listening tools provide significant value in the right hands (which is what Shawn said with bigger words).

  4. Sergio D'argenio from Dow Jones, May 4, 2011 at 3:11 a.m.

    Hi, thanks for the article!

    Shawn, I really liked your comment. Disambiguation and relevance can be controlled to a satisfactory extent during the information retrieval process.

    In Google, which supports a few Boolean operators for complex searches, I can disambiguate the query FOX on Twitter and retrieve a vast majority of relevant results.

    I tried the following query, and most of the results (I checked at least 10 SERPs at random) were relevant to the query:
    +Fox +sports OR +news OR +network -megan site:twitter.com
    (I could have added more disambiguation strings to make the query more inclusive/exclusive, but this was only a test.)
    If we could use more operators (e.g. proximity operators), we would be able to improve the results further.

    Anyhow, to build a complex search string and to collect intelligence you need a human, so what the author wrote makes perfect sense.
    Sergio

  5. Vishesh Bhavsar from Motif India Infotech, May 26, 2011 at 5:35 a.m.

    Hi John,

    This is a great article, and it represents social media monitoring and listening tools just as they are. I want to add a few opinions of my own, but before that, I will explain some of my social media background.

    I work with one of the outsourcing companies in India, providing social media listening services. As we all know, social media listening is one of the most critical practices that an organization or marketing department wants to run on a daily basis. Tools like SM2 and Radian6 can help you get a great amount of buzz, and very easily. However, as you mentioned, the relevance ratio of that buzz is hardly 30% (and that's being a little liberal), even when you have fed in a very specific and complex Boolean search string. And as you mentioned, the tools are not designed to catch human sentiment precisely, so there are obvious chances of them messing up.

    In my observation, tools are only able to give you about 15% (again, being liberal) of the data accurately. About 85% of the analysis thrown out by any tool is incorrect, no matter how sophisticated the tool is. As you rightly mentioned, you will always require human intervention to bring out correct sentiment and actionable insights.

    What I see is that you need a speedy in-house cleanup mechanism to make sure that the buzz captured is relevant to the scope of the report. This mechanism can be either an all-human process or a combination of human and automated processes.

    Now, that brings another challenge: you cannot combine the tool and the cleanup process, feed the cleaned data back into the tool, and keep working on it there. So the later part, the analysis, becomes an independent process altogether.

    What I see is that these tools are great for fetching data from any corner of the internet (provided that corner is defined in the tool's database). You then use that data in a second process that lets you analyze it and draft a report from it.

    I hope what I said makes sense to all the readers. :-)

    Thanks,
    Vishesh
