Plenty of people were understandably horrified. Motherboard’s Samantha Cole declared that “AI-Assisted Porn Is Here and We’re All F***ed.” Rolling Stone’s Kirsten Dold called it “a terrifying snapshot of what machine-learning technology can accomplish in the wrong hands.”
By February, Reddit had banned the subreddit, calling it “involuntary pornography” -- but the artificially intelligent cat was out of the face-swapping bag. Deepfakes (the original poster’s username has now become the name of the genre) are here to stay.
If involuntary pornography doesn’t seem particularly terrible, imagine if it happened to you — or to your daughter, or to your wife. It is, plainly, a violation, with potentially devastating consequences for the individuals involved.
Deepfakes aren’t just about porn, either. With enough reference imagery, you can use the technique to create video of almost anyone doing or saying almost anything.
There are, however, people out there who still care about old-fashioned values like “truth” and “reality.” And so a war is being fought, on multiple fronts.
Some, like Mary Anne Franks, a technology law professor at the University of Miami and advisor for the Cyber Civil Rights Initiative, say tech gatekeepers like Google and Facebook have to downgrade or deindex this kind of content.
Others, like the Pentagon’s Defense Advanced Research Projects Agency (DARPA), are working on AI that can recognize AI: machine learning software that can automatically detect artificially generated videos. And others are working on awareness and education -- like this video of “Barack Obama.”
These measures all seem woefully inadequate. The DARPA technology, for example, currently relies on the fact that deepfake videos don’t blink as often as real ones. But a newer technique called deep video portraits already addresses that issue.
It’s a cat-and-mouse game that will only get worse. And the advantage is fully with the fakes. You don’t need a perfect fake video to spread a rumor, sow distrust, feed people’s fears and biases, or undermine attempts at common ground. An okay fake video will do nicely. As Jonathan Swift wrote, “if a Lie be believ’d only for an Hour, it has done its Work, and there is no farther occasion for it. Falsehood flies, and the Truth comes limping after it; so that when Men come to be undeceiv’d, it is too late; the Jest is over, and the Tale has had its Effect.”
Which is why it is so intensely disturbing that Sarah Huckabee Sanders tweeted a doctored video of Jim Acosta, making it look like his encounter with an aide was worse than it was.
Like the fake interview of Alexandria Ocasio-Cortez from July, the Acosta video is not a deepfake or a deep video portrait. It doesn’t use any AI at all. It simply shows how easy it is to undermine the truth. If we can fall for the Acosta and Ocasio-Cortez videos, imagine what will happen when deepfakes properly permeate the political sphere.
If a government agency shares falsified videos and presents them as the truth, there must be consequences. Today, it’s Jim Acosta’s arm coming down slightly faster than it actually did. Tomorrow, he’s asking things he never asked, or doing things he never did.
We are altogether too blithe with our usage of the term “post-truth.” It’s time for us to decide whether that is actually what we want.