One thing that never fails to amaze me about the human race is how predictably we lie to ourselves. I recently attended a talk by
Peter de Jager, a change
management guy, in which he pointed out that any argument against the uptake of a new technology that involves its size or price is useless. "Nobody will ever use computers at home because they are way too big and expensive," etc. We laugh about those statements now, and yet we continue to say similar things about newer technologies.
I like the way de Jager thinks, and I'm going to
apply his syntactical model to our online lives. Here's Colbin's Axiom: Any argument against the uptake of a new online service that involves its invasion of privacy is
useless.
All anyone has to do is look at Facebook's history to see Colbin's Axiom in action. Imagine going back 10 or 20 years to explain to people that, in 2010, 500 million folks
will share all the most intimate details of their lives in an at least semi-public forum. Tell them that only a fraction of a percent of this population will even notice that there was anything inappropriate about Beacon, or that switching the default settings to a nearly-impossible-to-opt-out, everything-is-public-to-everyone mode was an egregious privacy violation.
Tell them that millions will be tweeting their most random thoughts. Tell them that Google will be storing their search histories and clickstreams and personalizing results and ads even when they're
not logged in.
Nobody would believe you. And, just as we display 20/20 hindsight when it comes to the size and cost of new technologies, we laugh or tut-tut at the naiveté of these
earlier versions of ourselves -- and then turn around and continue to display the same naiveté.
At least, I do. When Eric Schmidt came out in the Wall Street Journal last week saying that most people "want Google to tell them what they should be
doing next," I thought, "Like heck I do!" When author Holman Jenkins envisioned a not-too-distant future in which "[i]f you need milk and there's a place nearby to get milk, Google will remind you to
get milk. It will tell you a store ahead has a collection of horse-racing posters, that a 19th century murder you've been reading about took place on the next block," I thought, "Nobody will ever go
for that!"
It's one thing for Amazon to extrapolate that, because I like one book, chances are I'll like another, or to tell me what percentage of people who viewed this item ended up
purchasing it. It's quite another for technology to engage with me by connecting the content of the book across platforms or all the way into the physical world. It seems
invasive.
But if there's one thing the human race has proven in the Age of Zuckerberg, it's that our tolerance for invasiveness is a lot more elastic than we ever could have imagined.
The Journal article seemed creepy, but imagine if a press release came out tomorrow: "Foursquare and Amazon partner for real-time/real-world recommendation app." Doesn't sound particularly
far-fetched, does it?
Nope, the privacy arguments don't stand up to behavioral realities. Instead, it's execution and expectation that make or break our online services. When we're on
Foursquare, we expect the service to know where we are (that's the point, duh); likewise if we've checked into Facebook Places. For Google to alert us to those nearby horse-racing posters, it's going to have to get with the location-based-services program, pronto.
Would you check in with Google? Or is it just not that kind of relationship? I look forward to your comments, here or via
@kcolbin.