Commentary

Thank You For The Information

Just when you thought your point of view was safe inside your head, along comes shocking proof of algorithmic radicalization, at scale. A March 2018 New York Times article reports that YouTube will flood you with video suggestions curated to amplify and reflect your point of view — right back at you. YouTube does this to keep your attention, ergo, advertising dollars. Ego selfie.

This is not lost on Congress, which just introduced the Filter Bubble Transparency Act. More on that later.

But first, why does feeding you an amped-up version of what you like radicalize you? Easy. A psychological staple called confirmation bias overwhelms our critical thinking. It’s easier to passively accept reinforcement of our beliefs than to actively explore avenues that might negate them. Maybe YouTube should have a “grow me” button to sway the algorithm, instead of the implied default: “pander.”


The machine seeks to satisfy, to anticipate our every wish. We should be thankful because our information wants are being met with minimal effort. That’s good, right? 

Wrong. Telling us what we want to hear is dangerous in any case. 

The machine has needs, too. It decides what you get based on those needs. All the better for the machine if you think your needs are being met as well!

This raises a broader question about the motivations of those who inject information into our heads. Spoiler alert: It gets ugly.

In dog-eat-dog capitalism, it’s safe to assume any control you cede to someone (something?) else will be used to support their agenda, not yours. As a general rule for brain self-defense, the less you work to learn, the less likely it is that the information will serve your interests. No pain, no gain.

But the fire hose of new information coming in over screens has no historical analog. It renders us evolutionary infants, striving to cope as new signals arrive. We seek meaning.

We experience child-like fascination with strange stories, and with taboo subjects like sex and guns. We see drama at every turn.  We see the emotions of others, but through a prophylactic wall of pixels. 

Anyone can receive any information. But how do we decode it?  Science says our brains try to fit the information into some sort of context frame.  It’s one way we construct our mental model of the world.

But cheap, prefab context frames (like “Democrat” or “crooked”) take the thinking out. #Entropywins.

With our inputs overloaded, we’d rather accept the provided context than work to find the right one. Worse, with reward and punishment controlled by how a machine responds to our behaviors, the web now works like a giant Skinner box, giving us reinforcement when we behave. Humans are the rats.

Which brings us back to the Filter Bubble Transparency Act. The name, by itself, leaves me reeling. The proposed law, as the name suggests, attempts to fix the filter bubble problem with transparency, meaning disclosure. It would mandate that users be offered a choice to decline content served to them based on their personal data.

Ugh! So now what? You check the box that says don’t use my data to serve me content? Then what? Receive random information? Clearly, this will not solve the larger problem, but acknowledging that the problem exists is a first step.

In other news, the U.S. Congress grapples with what’s OK to say (for money or not) on Facebook. They should be reminded that the right question is not just whether Facebook has editorial obligations or rights. The higher-order question is how information impinges on society, the national interest, and the Constitution. What should we control? What safeguards should we institutionalize?

For example, is promulgating the Hillary-running-sex-trafficking story different from yelling “fire” in a crowded theater? Both cause real harm, and both are untrue.

Limits on free speech rely on the idea that something untrue will cause a clear and present danger. From Wikipedia: “The original wording used in [the] opinion [on free speech] ('falsely shouting fire in a theatre and causing a panic') highlights that speech that is dangerous and false is not protected, as opposed to speech that is dangerous but also true.” Dangerous to whom, though? Present where? 

When a suspicious stranger can decide what to put into your head, it is time to consider limits to freedom. There is no simple answer, but it’s clear enough that the current state, call it information anarchy, is not serving our interests as a nation or a culture.

Advertising, neither cause nor effect, is just caught in the crossfire.

3 comments about "Thank You For The Information".
  1. Douglas Ferguson from College of Charleston, November 8, 2019 at 10:06 a.m.

    In the final sentence, advertising is just caught? Where are the filters on advertising? I would like to click a box telling advertisers and agencies which ads are unwanted in my media experience. (And please add "all of the above" to the list.)

  2. John Grono from GAP Research, November 8, 2019 at 6:21 p.m.

    That filter is a very interesting idea, Douglas.

    I can think of some extensions of it as well.

  3. Ted Mcconnell from Independent Consultant, November 9, 2019 at 2:35 a.m.

    Like I said, don't expect free content to be free of bias. Everything you read was paid for by something. Advertising happens to pay for most of what you see on the internet. 
