Whether we like it or not, online connection engenders some decidedly bad behavior. It’s one of those unintended consequences that I like to talk about -- a behavioral side effect catalyzed by technology. And if that’s the case, we should know a little more about the psychology behind this behavior.
Modified Mob Behavior
So, when does a group become a mob? And when does a mob turn ugly? There are some aspects of herd mentality that seem particularly suited to online environments. A group turns into a mob when its behavior becomes synced to a common purpose. A recent study from the University of Southern California found two predictive signals in social media behavior that indicate when a group protest may become a violent mob.
1) Tipping over the threshold from an opinion to a moral. When we go from talking about our opinions to preaching morality, things can take a nasty turn. Let’s imagine a spectrum with loosely held opinions at the left end -- things you’re not that emotionally invested in -- moving through beliefs, and then on to morals at the right end.
This progression also correlates with different ways the brain processes these thoughts. At the least intense left end of the spectrum -- opinions -- we can process thoughts with relatively detached rationality. But as we move to the right, different parts of the brain start kicking in and begin to raise the emotional stakes.
When we believe we’re talking about morals, we're “concerned with the principles of right and wrong behavior and the goodness or badness of human character,” according to the dictionary definition of the word.
This triggers our ancient and universal feelings about fairness, betrayal, subversion and degradation -- the planks of moral foundations theory. The researchers in the USC study found that people are more likely to endorse violence when they moralize the issue. When there are clearly held beliefs about right and wrong, violence seems acceptable.
2) Violence needs company. This moralizing signal is not necessarily tied to being online -- but the second predictive signal is. The researchers also found that if people believe others share their views, they are more likely to tip over the threshold from peaceful protest to violence. This is Mark Granovetter’s crowd threshold effect that I’ve talked about before.
In social media, this effect is amplified by content filtering and the structure of your network. Like-minded people naturally link to each other, and their posts make for remarkably efficient indicators of their beliefs. It’s very easy in a social network to feel that everybody you know feels the same way that you do. The degree of violent language can escalate quickly through online posts -- until the entire group is pushed over the threshold into a mode of behavior that would be unthinkable for any of them as disconnected individuals.
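Granovetter’s threshold model is simple enough to sketch in a few lines of code. The following is a minimal, hypothetical simulation (the thresholds are illustrative, not taken from the studies discussed here): each person joins in once the number of people already acting meets their personal threshold, so a uniform spread of thresholds lets a single instigator tip the entire group.

```python
# Hedged sketch of Granovetter's threshold model of collective behavior.
# Each person has a threshold: the number of others who must already be
# acting before they will join in.
def cascade(thresholds):
    """Return how many people end up acting, given each person's threshold."""
    acting = set()
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            # A person joins once the current count of actors meets their threshold.
            if i not in acting and len(acting) >= t:
                acting.add(i)
                changed = True
    return len(acting)

# A uniform spread of thresholds 0..9: person 0 acts unprompted, which tips
# person 1, which tips person 2, and so on -- the whole group cascades.
print(cascade(list(range(10))))          # 10

# Remove the threshold-1 person and the chain breaks after the instigator.
print(cascade([0] + list(range(2, 11)))) # 1
```

The second run illustrates the model’s key insight: whether a crowd tips over is determined not by the average attitude of its members but by the exact distribution of thresholds -- removing one pivotal person can stop the entire cascade.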
Trolls, Trolls Everywhere
Another study, this time from Stanford, shows that any of us can become a troll. We would like to think that trolls are just a small (if seemingly ubiquitous) group of horrible people. But this research indicates that trolling is more situational than previously thought. In other words, if we’re in a bad mood, we’re more likely to become a troll.
But it’s not just our mood. Here again Granovetter’s threshold model plays a part. Negative comments beget more negative comments, starting a downward spiral of venom.
The researchers ran a behavioral experiment in which participants first completed either an easy or a difficult task, then read an online article accompanied by either three neutral comments or three negative, troll-like comments.
The results were eye-opening. In the group that was assigned an easy task and read the article that had the neutral comments, about 35% posted a negative comment. Knowing that one in three of us seem to have a low threshold for becoming a troll is not exactly encouraging, but it gets worse.
If participants either did the difficult task or read the negative comments, the likelihood of posting a troll-like comment jumped to 50%. And if participants got both the difficult task and the negative comments, the number climbed to 68%!
In the three-part study, another factor that could lead to becoming a troll was the time that posts were made. Late Sunday and Monday nights are the worst times of the week for negative posts, and Twitter bullying peaks between 5 p.m. and 8 p.m. on Sundays. While we’re on the subject, Donald Trump tends to tweet early in the morning, and his most inflammatory tweets come on Saturdays.
But when it comes to trolling, there’s something else at play. Yet another study, this time from Federation University Australia, found that our own brand of empathy can also predict whether we’re going to become a troll or not.
There are two kinds of empathy. Cognitive empathy means you can understand other people’s emotions -- you know what will make them happy or mad. But affective empathy means you can internalize and experience the emotions of another: If they’re happy, you’re happy.
Not surprisingly, trolls tend to have high cognitive empathy but low affective empathy. Obviously, there were plenty of such people before the Internet, but they’ve now gained the perfect forum for their twisted form of empathy. They can incite negativity relatively free from social consequence and reprisal. Even if their comments are not anonymous, the poster can hide behind a degree of detachment that would be impossible in a physical environment.
Why should we care about this? Again, it comes back to the unintended social consequences of technology. Increasingly, our connections are digital in nature. And for reasons already stated, I worry that these types of connections may bring out the worst in us.