I’ve talked in the past about taking a Bayesian approach to strategy. The more I explore this idea, the better I like it. But it comes with some challenges – the biggest being that we’re not Bayesian by nature. In fact, there’s a cognitive bias roughly the size of a good-sized cow barn that often leaves us blind to the true state of affairs. In psychological circles, it’s called Confirmation Bias, and in a comprehensive academic review in 1998, Raymond Nickerson summed up its potential negative impact: “If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration.”
Here’s the thing. We love to be right. We hate to be wrong. So we will go to extraordinary lengths to make sure that we’re proven correct. And we won’t even know we’re doing it. Our brain, working surreptitiously in the background, doesn’t alert us to how biased we actually are. The many tricks that go along with Confirmation Bias usually play out subconsciously.
If we try to be good little Bayesians, we have to embrace alternative ideas of all shapes and sizes, whether or not they agree with our current view of things. In fact, we should be prepared to rip our current view apart, as it’s in the disproving and rebuilding of hypotheses that the truth is eventually found.
Here’s where things go wrong in most market testing. We usually test to prove our hunches right. We go in with a favored option and try to build a case for it. We may deny it, but we all do it. That means the less favored alternatives usually get short shrift. Yet it’s often in one of those alternatives that the optimal choice lies. And the more there is at stake in the test, the more susceptible we are to Confirmation Bias.
Here is the rogue’s gallery of typical Confirmation Bias tricks:
Favored Hypothesis Information Seeking and Interpretation – As I said, we tend to seek information that supports our favored hypothesis, and avoid information that would contradict it. In the Bayesian view, this is equivalent to ignoring likelihood ratios.
Preferential Treatment of Evidence Supporting Existing Beliefs – Even if we somehow collect unbiased information, we will tend to focus on the information that supports our favored view. It gets “over-weighted” in analysis.
Looking for Positive Cases – This is the classic trap of testing only for winners and ignoring the losers. Often, the losers can tell us more about the true state of affairs.
The Primacy Effect – We tend to pay more attention to the first information we look at, which can bias analysis of any subsequent information.
Belief Persistence – Even when the evidence mounts that our original hunch is wrong, we can be incredibly inventive in twisting evidentiary frameworks to provide continuing support. Along with this comes another bias, the “Sunk Cost Fallacy”: the more we have invested in our original hunch (e.g. a major multimillion-dollar campaign launched on the strength of it), the more tenacious we are in holding on to it.
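The likelihood-ratio point from the first trick above can be made concrete. Here’s a minimal sketch – with made-up numbers and a hypothetical stream of test results – comparing an honest Bayesian updater, which weighs contrary evidence by its likelihood ratio, against a “confirmation-biased” updater that simply ignores results that don’t support the favored hunch:

```python
# A minimal sketch (hypothetical numbers) of how ignoring contrary
# evidence distorts a Bayesian update. We track the probability that
# a favored hypothesis H is true as test results come in.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Standard Bayes rule: weigh the evidence by how likely it is
    under H versus under not-H (the likelihood ratio)."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# Hypothetical evidence stream: True = a result supporting H,
# False = a result against H. Assume supporting results are twice
# as likely when H is true (0.6 vs 0.3), and contrary results are
# twice as likely when H is false.
evidence = [True, False, True, False, False, False]

honest = biased = 0.5  # start with an even prior on the favored hunch
for supports_h in evidence:
    if supports_h:
        honest = bayes_update(honest, 0.6, 0.3)
        biased = bayes_update(biased, 0.6, 0.3)
    else:
        honest = bayes_update(honest, 0.3, 0.6)
        # Confirmation bias in action: contrary results are simply
        # ignored, so the biased belief never moves downward.

print(f"honest posterior: {honest:.2f}")  # 0.20 -- the hunch looks weak
print(f"biased posterior: {biased:.2f}")  # 0.80 -- same data, opposite conclusion
```

With four contrary results against two supporting ones, the honest posterior falls to 0.20 while the biased one climbs to 0.80 – the same data producing opposite conclusions, purely from which results get counted.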
Going back a few columns to Philip Tetlock’s Hedgehogs and Foxes, he found that Foxes make much better natural Bayesians. They are more open to updating their beliefs. The big takeaway here? Keep an open mind.