Ever heard of Strava? The likelihood that you would say yes jumped astronomically on Jan. 27 -- the day of the Strava security breach. Before that, you had probably never heard of it, unless you happened to be a cyclist or runner.
I’ve talked about Strava before. Then, I was talking about social modality and trying to keep our various selves straight on various social networks. Today, I’m talking about privacy.
Through GPS-enabled devices, like a fitness tracker or smartphone, Strava enables you to track your workouts, including the routes you take. Once a year, it aggregates all these activities and publishes them as a global heatmap. Over 1 billion workouts are mapped across every corner of the earth. If you zoom in enough, you’ll see my favorite cycling routes in the city I live in. The same is true for everyone who uses the app -- unless, of course, you’ve opted out of the public display of your workouts.
And therein lies the problem. Actually, two problems.
Problem number one. There is really no reason I shouldn’t share my workouts. The worst you could find out is that I’m a creature of habit when working out. But if I’m a Marine stationed at a secret military base in Afghanistan and I start my morning by jogging around the perimeter of the base -- well, now we have a problem. I’ve just inadvertently highlighted my base on the map for the world to see.
And that’s exactly what happened. When the heatmap went live, a university student in Australia happened to notice there were a number of hotspots in the middle of nowhere in Afghanistan and Syria.
On to problem number two. In terms of numbers affected, the Strava breach is a drop in the bucket when you compare it to Yahoo, Equifax, Target -- or any of the other breaches that have made the news. But this breach was different in a very important way. The victims here weren’t individual consumers. This time, national security was threatened. And that moved it beyond the “consumer beware” defense that typically gets invoked.
This charts new territory for privacy. The difference in perspective in this breach has heightened sensitivities and moved the conversation in a new direction. Typically, the response when there is a breach is:
a) You should have known better.
b) You should have taken steps to protect your information. Or,
c) Hmmm, it sucks to be you.
Somehow, these responses have persisted through previous breaches, despite the fact that we all know it’s almost impossible to navigate the minefield of settings and preferences that lies between you and foolproof privacy. As long as the victims were individuals, it was easy to shift blame.
This time, however, the victim was the collective “we,” and the topic was the hot button of all hot buttons: national security.
Now, one could argue that all those responses above might apply to the unfortunate soldier who decided to take his Fitbit on his run, but I don’t think it will end there. I think the current “opt out” approach to net privacy might have to be reconsidered.
The fact is, all these platforms would prefer to gather as much of your data as possible and retain the right to use it as they see fit. Doing so opens up a number of monetization opportunities for them. Typically, the quid pro quo offered back to you, the user, is more functionality and the ability to share with your own social circle.
The current ecosystems' default starting point is to enable as much sharing and functionality as possible. Humans being human, we will usually go with the easiest option -- the default -- and only worry about it if something goes wrong.
But as users, we do have the right to push back. We have to realize that opening the full data pipe gives platforms much more value than we ever receive in return. We’re selling off our own personal data for the modern-day equivalent of beads and trinkets.
And the traditional corporate response -- “you can always opt out if you want” -- simply takes advantage of our own human limitations. The current fallback is to introduce more transparency into their approaches to privacy, making them easier to understand.
While this is a step in the right direction, a more ethical approach would be opt-in, where the default is maximum protection of our privacy and we have to make a conscious effort to lower that wall.
We’ll see. Opting in puts ethics and profitability on a collision course. For that reason, I can’t ever see the platforms going in that direction unless we insist.