Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see us stripping away the awkward and inefficient interfaces that have been interposed between us and that information. Let’s imagine that, 15 years from now, Google Glass and other wearable technologies provide a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information -- we become omniscient.
To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.
Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual form that can be parsed and manipulated by the technology we use to connect with the virtual world.
In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can’t stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online -- a world full of digital voyeurs?
I have no doubt that technology can take us to the not-too-distant future I’ve envisioned. Much of what’s required already exists. Implantable hardware, heads-up displays, sub-vocalization, biofeedback -- it’s all very doable. What I wonder about is not the technology, but rather us. We move at a much slower pace. And we may not recognize any damage that’s done until it’s too late.
The Darwinian Brain
At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we’re exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and “prunes” away those that aren’t. In this way, the brain literally wires itself to be in sync with our environment.
The majority of this neural wiring happens when we’re still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less “plastic” than those of children, the adaptation won’t be as quick or complete.
The Absorption of Technology by Society
I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.
If a technology is ubiquitous and effective enough to spread globally, like TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information distribution paradigm. One can’t help thinking that TV played a significant role in much of the political change we saw sweep over the world in the past three decades.
And even if our brains quickly adapt to technology, that doesn’t mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in “Bowling Alone” and overlay the timing of the adoption of TV, the correlation is striking and not a little frightening.
Even if our brains have the ability to adapt to technology, that adaptation isn’t always a positive change. For example, there is compelling evidence that early exposure to TV has contributed to the recent explosion of diagnosed ADHD and possibly even autism.
Knowing Isn’t Always the Same as Understanding
Finally, we have the greatest fear of Nicholas Carr: maybe this immediate connection to information will have the “net” effect of making us stupid -- or, at least, more shallow thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?
Personally, I’m not sure Carr’s fears are well founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload to technology the simple journeyman tasks of retrieving information and compiling it for consideration, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.
The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.