If advanced audience-targeting platform Cambridge Analytica was -- as critics have charged -- used as a form of “weaponized propaganda” during the 2016 U.S. presidential campaign, then
Facebook was its ammunition.
That's one of the epiphanies revealed in a series of stories published by the New York Times over the weekend, which reported that Cambridge Analytica “harvested
private information from the Facebook profiles of more than 50 million users without their permission,” which the paper described as a data “breach” enabling those users to be
targeted based on their “private social media activity.”
That Cambridge Analytica was opportunistic in exploiting the data isn’t that enlightening, but the fact that Facebook has
done nothing to recover it raises new questions about the role the omnipotent social media network plays in manipulating how people feel, think and behave.
Meanwhile, a second epiphany
revealed by the Times reporting raises questions about exactly who Cambridge Analytica was working for: no, not billionaire Republican donor Robert Mercer, who invested in the firm and helped
the Trump campaign team exploit its power, but an even more nefarious source.
Despite claims to the contrary, the Times reported that Cambridge Analytica had conversations with Russians interested in influencing the
election, and that an affiliated company, SCL Group, has worked for Russian clients.
The revelations raise new questions not just for Cambridge Analytica or Facebook, but for the entire programmatic
media industry: Are there ethical, or even national security, imperatives that should check the constant push to innovate at all costs? Especially now, as we begin to leverage new data, new
forms of machine learning and even more advanced AI that can effectively weaponize our own behaviors and identities against us?
Last fall, as evidence began mounting that it
wasn’t just Trump campaign operatives exploiting this technology, but also hostile foreign powers, I described the emergence of a new acronym in ad-tech’s lexicon: “WMD,” as in
weapon of mass destruction. This weekend’s revelations affirm that acronym is apt.
Meanwhile, the industry, regulators and the public need to think about how to hold these
platforms accountable. Even if it was nothing more than negligence, Facebook’s failure to respond to a 50-million-user data breach, on top of other recent transgressions, places a new level of
onus on the social network.
Whether or not regulators move in any meaningful way, let me conclude by asking a question directly of our readers: Is there a role for us -- advertisers,
agencies, and all the ad-tech middlemen and third-party processors that do business with Facebook -- to call for changes and better safeguards?
For all the ad industry’s talk
about not supporting unsavory environments -- whether questionable content, piracy, malware or organized crime -- shouldn’t there also be some industry ethic against supporting a
platform that can play a role in destabilizing democracy, and does nothing about it after the fact?