Commentary

Terms Of Abuse

Data is the lifeblood of any algorithm. For newer AI marketing platforms to perform, they need access to many forms of data, most of which stems from user activity. The fact that consumers generate and provide this data is increasingly becoming a hot button.

The “Face-alytic” maelstrom may someday be referenced as the real awakening of privacy awareness.  It took a Facebook-sized breach to shine a glaring light on the way personal data is gathered and used for marketing.  

And even the most knowledgeable data experts I’ve spoken with have been surprised at the level of data collected via Facebook’s messaging app and phone conversations.  

John Montgomery, executive vice president, global brand safety at GroupM, speculates about the Facebook accusations: “It is either an issue with the data sharing rules or that they were not able to apply the rules as intended. Probably the latter.

“The level of public outrage is magnified because Facebook is a social network that people have trusted with their most personal data and they are now starting to understand how much data is being collected and used, and it has been amplified into an emotional issue,” adds Montgomery.

But it’s curious to me that this wave of negativity about Facebook is not just a reaction to the violation of terms, but also to the conventional way data has been used by marketers all along -- a use legally covered by the terms of service users sign. In other words, Facebook’s terms weren’t understood -- probably (surprise, surprise) because they weren’t read.

Seemingly no one ever reads user terms of service, which are practically impossible to get through -- typically, mini novellas in four-point type using difficult-to-understand legalese.  In his book “Future Crimes,” Marc Goodman quips that these contracts should be more aptly called “terms of abuse,” as they specifically tell the user how their data will be owned and used in myriad ways to the benefit of the company, and sometimes the detriment of the user.  

Goodman uses the example of LinkedIn, whose privacy policy states: “You grant LinkedIn a nonexclusive, irrevocable, worldwide, perpetual, unlimited, assignable, sublicenseable, fully paid up and royalty-free right for us to copy, prepare derivative works of, improve, distribute, publish, remove, retain, add, process, analyze, use and commercialize, in any way now known or in the future discovered, any information you provide directly or indirectly to LinkedIn.”  

We sign because we haven’t read the terms, or simply don’t have time to fight the points that might be unfair. Usually bad things don’t happen, but there is a question of whether these onerous agreements should be forced on people. Will consumers’ newfound awareness of how their data is used create more angst and reluctance to participate in social spaces?

Like the LinkedIn example, other digital terms of service are outrageously assumptive in laying claim to data.  Would you be surprised to know that photos sitting on your phone might be legally accessed to understand your relationship with brands you have photographed?  How would you feel if a film or book review you wrote was analyzed to create a psychological profile about you, the writer?  Even if you are not personally identified, this is mighty personal information.  

And who is to say that these attributes won’t relate directly to your identifiable profile -- especially if a first-party agreement exists? 

The reality is that data innovation using AI is laying bare details about users that they don’t even know themselves. An AI company I recently met with claims to be able to diagnose illnesses merely by listening to voices and analyzing them. What if you signed an agreement allowing that to happen?

In a MediaPost article written a year ago by Mike Azzara, Esther Dyson predicted, “The advertising community has been woefully unforthcoming about how much data they’re collecting and what they're doing with it. And it’s going to backfire on them, just as the Snowden revelations backfired on the NSA.” Does it feel like we are getting dangerously close to this prediction?

Shelley Palmer’s recent blog post on this subject asserts that more understanding about data is a good thing, and users will ultimately have to get smart about finding ways to control their own data.  

Platforms such as Facebook will continue to leverage user data, but will need to make clear where the line is drawn, and let their users know the score. Already, Facebook has promised to make privacy more straightforward and easier to understand.

“I think there is an opportunity to be completely transparent with how it’s done,” offers Montgomery.

Whatever happens next, consumers will be more tuned in to their data and how it’s used. So let’s please get it right.

5 comments about "Terms Of Abuse".
  1. Henry Blaufox from Dragon360, March 30, 2018 at 10 a.m.

    Since Facebook has collected and sold (well, rented) user data for years, I wonder how much of the sudden uproar over the CA incident is due to the way it exploded across the social media landscape - the Trump connection so touted by CA's Alexander Nix last year as a way to promote his firm. In fact, CA by itself is a burp in a blizzard. Numerous firms, most substantially larger and with global presence, paid to use Facebook data in conjunction with their own data sets for targeting and segment identification. The third-party relationships are now under examination, and changes to the business uses are underway. It will not be as easy for marketers to combine, mine and analyze data for lookalike modeling and so on for the foreseeable future.

    As for Goodman's LinkedIn example, should we look at the quoted terms of use as just one side of the arrangement? The other side, the user, gets in return "free exposure and professional promotion of myself and my abilities among my peers and industry groups, whether closely connected or not."

  2. Sarah Fay from Glasswing Ventures, March 31, 2018 at 9:12 a.m.

    Henry, thanks for these great observations. Yes, I believe the election is at the heart of the uproar, which I might have explored in more detail if these articles were meant to be longer. To me, "fake news" is a bigger violation of trust than misuse of data (although it is all tied together). And you are right that CA just offers an example of how all this data can be applied. There are MANY more platforms poised to do similar things.

    As for the LinkedIn example, I agree with you there as well. You and I are obviously still on LinkedIn despite the fact we know their terms, so the trade-off must be okay with us. We might think twice before sharing a document we wanted to maintain exclusive rights to, though. So awareness of what we have signed may play a role in our behavior. Going forward, people may think more about their actions, how their data is collected, and perhaps what they might get in return.

  3. Andrew Susman from New Value Associates, April 2, 2018 at 10:03 a.m.

    Hello Sarah. This was a wonderful and timely piece. Two words: "Meaningful Disclosure."

  4. Sarah Fay from Glasswing Ventures, April 2, 2018 at 3:34 p.m.

    Andrew, yes to meaningful disclosure, and possibly incentives to sharing more data...

  5. Patrick Stroh from Brunner / data science, analytics, April 19, 2018 at 2:12 p.m.

    Good article.  Thanks.  Will share.
