Commentary

The Hidden Agenda Behind Zuckerberg's 'Meaningful Interactions'

It probably started with a good intention. Facebook -- aka Mark Zuckerberg -- wanted to encourage more “meaningful interactions.” And so, early last year, Facebook engineers started making some significant changes to the algorithm that determined what you saw in your News Feed. Here are some excerpts from Zuck’s post to that effect:

“The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos -- even if they're entertaining or informative -- may not be as good.”

That makes sense, right? It sounds logical. Zuckerberg went on to say how the company was changing Facebook’s algorithm to encourage more “meaningful interactions.”

“The first changes you'll see will be in News Feed, where you can expect to see more from your friends, family and groups.

“As we roll this out, you'll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard -- it should encourage meaningful interactions between people.”

Let’s fast-forward almost two years to see the outcome of that good intention: an ideological landscape with a huge chasm where the middle ground used to be.

The problem is that Facebook’s algorithm naturally favors content from like-minded people. And it doesn’t take a very high degree of ideological homogeneity to create a highly polarized landscape. This shouldn’t have come as a surprise: American economist Thomas Schelling showed us, almost 50 years ago, how easily segregation can happen.

The Schelling Model of Segregation was created to demonstrate why racial segregation was such a chronic problem in the U.S., even given repeated efforts to desegregate. The model showed that even when we’re pretty open-minded about who our neighbors are, we will still tend to self-segregate over time.

The model works like this. A grid represents a population with two different types of agents: X and O. The square that the agent is in represents where they live. If agents are satisfied, they will stay put. If they aren’t satisfied, they will move to a new location.

The variable here is the satisfaction threshold: the percentage of an agent's immediate neighbors that must be the same type as it is. For example, the threshold might be set at 50%, meaning an X agent needs at least 50% of its neighbors to also be of type X. (If you want to try the model firsthand, Frank McCown, a computer science professor at Harding University, created an online version.)

The most surprising thing that comes out of the model is that this satisfaction threshold doesn’t have to be set very high at all for extensive segregation to happen over time. You start to see significant “clumping” of agent types at percentages as low as 25%. At 40% and higher, you see sharp divides between the X and O communities. Remember, at 40%, Agent X only needs 40% of its neighbors to also be of the X persuasion; it’s okay being surrounded by up to 60% Os. That is much more open-minded than most human agents I know.
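
To make the mechanics concrete, here is a minimal Python sketch of a Schelling-style simulation. The grid size, vacancy rate and eight-cell neighborhood are assumptions of my own for illustration; the 40% threshold matches the example above, and the code is a sketch of the idea rather than McCown's implementation.

```python
import random

# A minimal sketch of the Schelling segregation model described above.
# Grid size, vacancy rate and 8-cell neighborhood are illustrative
# assumptions; the 40% threshold matches the example in the text.

SIZE = 40          # the grid is SIZE x SIZE
EMPTY_RATIO = 0.1  # fraction of cells left vacant so agents can move
THRESHOLD = 0.4    # satisfied if >= 40% of neighbors are the same type

def make_grid():
    """Randomly scatter X agents, O agents and vacant (None) cells."""
    cells = [None if random.random() < EMPTY_RATIO
             else random.choice(["X", "O"])
             for _ in range(SIZE * SIZE)]
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, row, col):
    """The up-to-eight occupied cells surrounding (row, col)."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr, dc) != (0, 0) and 0 <= r < SIZE and 0 <= c < SIZE \
                    and grid[r][c] is not None:
                out.append(grid[r][c])
    return out

def is_satisfied(grid, row, col):
    nbrs = neighbors(grid, row, col)
    if not nbrs:
        return True  # an isolated agent has nothing to object to
    same = sum(n == grid[row][col] for n in nbrs)
    return same / len(nbrs) >= THRESHOLD

def step(grid):
    """One round: every unsatisfied agent moves to a random vacant cell."""
    vacancies = [(r, c) for r in range(SIZE) for c in range(SIZE)
                 if grid[r][c] is None]
    moved = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and not is_satisfied(grid, r, c):
                dest = random.choice(vacancies)
                vacancies.remove(dest)
                vacancies.append((r, c))
                grid[dest[0]][dest[1]], grid[r][c] = grid[r][c], None
                moved += 1
    return moved

def average_sameness(grid):
    """Average share of an agent's occupied neighbors that match its type."""
    scores = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            nbrs = neighbors(grid, r, c)
            if nbrs:
                scores.append(sum(n == grid[r][c] for n in nbrs) / len(nbrs))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    random.seed(0)
    grid = make_grid()
    print(f"before: {average_sameness(grid):.0%} like-typed neighbors")
    for i in range(100):
        if step(grid) == 0:  # everyone is satisfied; clusters have formed
            print(f"settled after {i} rounds")
            break
    print(f"after:  {average_sameness(grid):.0%} like-typed neighbors")
```

Even with agents content to be a 40% minority in their own neighborhood, the average like-typed share of everyone's neighbors climbs well past that threshold once the moving stops.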

Let’s move the Schelling Model to Facebook. We know from the model that even pretty open-minded people will physically segregate themselves over time. The difference is that on Facebook, they don’t move to a new part of the grid, they just hit the “unfollow” button. And the segregation isn’t physical -- it’s ideological.

This natural behavior is then accelerated by Facebook's “meaningful interactions” algorithm, which filters on the basis of the people you have connected with, setting in motion an ever-tightening spiral that eventually restricts your feed to a very narrow ideological horizon.
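
The tightening spiral is easy to caricature in code. The toy model below is emphatically not Facebook's actual ranking system (every rule and number in it is a hypothetical assumption), but it shows how a feed that weights friends by past engagement, combined with a user who engages mostly with like-minded posts, drifts toward an ever more homogeneous view of the world.

```python
import random

# A toy sketch of the feedback loop described above: the feed ranks
# friends by past engagement, the user engages mostly with like-minded
# posts, and that engagement feeds back into the ranking.
# All numbers and rules here are illustrative assumptions.

N_FRIENDS = 100
FEED_SIZE = 10
ROUNDS = 200

random.seed(0)
my_view = "X"
friend_views = [random.choice(["X", "O"]) for _ in range(N_FRIENDS)]
affinity = [1.0] * N_FRIENDS  # the ranker's per-friend weight

def sample_feed():
    """Pick FEED_SIZE friends, weighted by affinity (the 'filter')."""
    return random.choices(range(N_FRIENDS), weights=affinity, k=FEED_SIZE)

like_minded_share = []
for _ in range(ROUNDS):
    feed = sample_feed()
    like_minded_share.append(
        sum(friend_views[f] == my_view for f in feed) / FEED_SIZE)
    for f in feed:
        # "Meaningful interaction": the user engages mainly with
        # like-minded posts, and engagement raises that friend's weight.
        if friend_views[f] == my_view:
            affinity[f] += 1.0

print(f"like-minded share, first 20 rounds: "
      f"{sum(like_minded_share[:20]) / 20:.0%}")
print(f"like-minded share, last 20 rounds:  "
      f"{sum(like_minded_share[-20:]) / 20:.0%}")
```

Note that this sketch models only the algorithmic half of the loop: the feed narrows on its own, and nobody ever has to hit “unfollow.” The unfollow button just speeds things up.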

The resulting cluster then becomes a segment used for ad targeting. We can quickly see how Facebook first intentionally built these very homogeneous clusters by changing its algorithm, and then profits from them by providing advertisers the tools to micro-target them.

Finally, after doing all this, Facebook absolves itself of any responsibility to ensure that subversive and blatantly false messaging isn’t delivered to these ideologically vulnerable clusters.

It’s no wonder comedian Sacha Baron Cohen just took Zuck to task, saying “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem.’”

In rereading Mark Zuckerberg’s post from two years ago, you can’t help but start reading between the lines. First of all, there is mounting evidence against his contention that meaningful social media interactions help your well-being. It seems quitting Facebook entirely is much better for you.

And secondly, I suspect that -- just like his defense of running false and malicious advertising by citing free speech -- Zuck has a not-so-hidden agenda here. I’m sure Zuckerberg and his Facebook engineers weren’t oblivious to the fact that their changes to the algorithm would result in nicely segmented psychographic clusters that would be like catnip to advertisers -- especially political advertisers. They were consolidating exactly the same vulnerabilities that were exploited by Cambridge Analytica.

They were building a platform that was perfectly suited to subvert democracy.