Last week, when I talked about the current furor around the Cambridge Analytica scandal, I said that part of the blame -- or at least, the responsibility -- for the protection of our own data belonged to us.
Reader Chuck Lantz responded with: “In short, just because a company such as FaceBook can do something doesn't mean they should. We trusted FaceBook and they took advantage of that trust. Not being more careful with our own personal info, while not very wise, is not a crime. And attempting to dole out blame to both victim and perpetrator ain't exactly wise, either.”
Whether it’s wise or not, when it comes to our own data, there are only three places we can reasonably look to protect it:
1) The government
One only has to look at the supposed “grilling” of Zuckerberg by Congress to realize how forlorn a hope this is. In a follow-up post, the Wharton School of the University of Pennsylvania ran a list of the questions Congress should have asked, compiled from its own faculty.
My personal favorite comes from Eric Clemons, professor of operations, information and decisions: “You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?”
Technology has left the regulatory ability to control it in the dust. The EU is probably the most aggressive legislative jurisdiction in the world when it comes to protecting data privacy. The General Data Protection Regulation takes effect on May 25 of this year and incorporates sweeping new protections for EU citizens. But it will inevitably come up short in three key areas:
a. Even though it immediately applies to all countries processing the data of EU citizens, international compliance will be difficult to enforce consistently, especially if that processing extends beyond “friendly” countries.
b. New technologies will quickly expose gray areas in the legislation that lead to the misuse of data. Technology will always move faster than legislation. As an example, the GDPR and blockchain are on a seeming collision course: the regulation’s right to erasure is hard to reconcile with an immutable, append-only ledger.
c. Most importantly, the GDPR is aimed at worst-case data scenarios. But many apparently benign applications can border on misuse of personal data. In trying to police even the worst-case instances, the GDPR imposes restrictions that will directly cost users convenience and functionality. And key areas, such as data portability, aren’t fully addressed in the new legislation. At the end of the day, even though it’s protecting them, users will find the GDPR a pain in the ass.
Even with these fundamental flaws, the GDPR probably represents the world’s best attempt at data regulation. The U.S., as we’ve seen in the past week, comes up well short of this. And even if the people involved weren’t doddering, old, technologically inept farts, the mechanisms required for the passing of relevant and timely legislation simply aren’t there. It would be like trying to catch a jet with a lasso.
Should this be the job of government? Sure, I can buy that. Can government handle the job? Not based on the evidence we currently have available to us.
2) The companies that aggregate and manipulate our data
Philosophically, I completely agree with Chuck. As I said last week, the point of view I took left me ill at ease. We need these companies to be better than they are. We certainly need them to be better than Facebook was. But Facebook has absolutely no incentive to be better.
And my fellow Media Insider, Kaila Colbin, nailed this in her column last week: “Facebook doesn’t benefit if you feel better about yourself, or if you’re a more informed, thoughtful person. It benefits if you spend more time on its site, and buy more stuff. Giving the users control over who sees their posts offers the illusion of individual agency while protecting the prime directive.”
There are no inherent, proximate reasons for companies to be moral. They are built to be profitable (which, by the way, is why governments should never be run like a company).
Facebook’s revenue model is directly opposed to the protection of personal data. And that is why Facebook will try to weather this storm by layering on more self-directed privacy controls to put a good face on things. We will ignore those controls, because it’s a pain in the ass to do otherwise. And this scenario will play out again and again.
3) Ourselves

It sucks that we have to take this into our own hands. But I don’t see another option. Unless you see something in the first two alternatives that I don’t, we have no choice but to take responsibility ourselves. Do you want to put your security in the hands of the government -- or Facebook? The first doesn’t have the horsepower to do the job, and the second is heading in the wrong direction.
So if the responsibility ends up being ours, what can we expect?
A few weeks ago, another fellow Insider, Dave Morgan, predicted that the moats around the walled gardens of data collectors like Facebook will get deeper. But the walled garden approach is not sustainable in the long run. All the market forces are going against it. As markets mature, they move from silos to open markets. The marketplace of data will head in the same direction. Protectionist measures may be implemented in the short term, but they will not be successful.
This doesn’t negate the fact that the protection of personal information has suddenly become a massive pain point, which makes it a huge market opportunity. And like almost all truly meaningful disruptions in the marketplace, the ability to lock down our own data will come from entrepreneurialism.
We need a solution that guarantees universal data portability while at the same time maintaining control without putting an unrealistic maintenance burden on us. Rather than having the various walled gardens warehouse our data, we should retain ownership. That data should only be offered to platforms like Facebook on a case-by-case, need-to-know, transactional basis.
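To make the idea concrete, here is a minimal sketch of what that case-by-case, need-to-know, transactional model could look like. Everything here is hypothetical: the `DataVault` class and its methods are inventions for illustration, not an existing product or API.

```python
import time

class DataVault:
    """A user-owned store of personal data. The user, not the platform,
    retains ownership; platforms get narrow, time-limited access grants."""

    def __init__(self):
        self._data = {}    # field name -> value, held by the user
        self._grants = {}  # platform -> (allowed fields, expiry timestamp)

    def put(self, field, value):
        self._data[field] = value

    def grant(self, platform, fields, ttl_seconds):
        # Case-by-case and need-to-know: only the named fields,
        # and only until the grant expires.
        self._grants[platform] = (set(fields), time.time() + ttl_seconds)

    def revoke(self, platform):
        self._grants.pop(platform, None)

    def request(self, platform, field):
        # Each access is transactional; the platform never warehouses
        # the data, it asks for it when needed.
        fields, expiry = self._grants.get(platform, (set(), 0))
        if field in fields and time.time() < expiry:
            return self._data.get(field)
        return None
```

In this sketch, a platform that asks for a field it was never granted, or asks after the user revokes access, simply gets nothing back: the default is denial, and the user's vault is the single point of control.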
Will that solution be disruptive to the current social ecosystem? Absolutely. And that’s a good thing.
Advertising targeting is not a viable business model for the intertwined worlds of social connection and personal functionality. There is just too much at stake here. The only way it can work is for the organization doing the targeting to retain ownership of the data used for the targeting. And we should not trust them to do so in an ethical manner. Their profitability depends on them going beyond what is -- or should be -- acceptable to us.