You Can’t Believe Your Eyes
I just saw "Star Wars: The Last Jedi." When Carrie Fisher came on screen, I had to ask myself: Is that really her, or is it CGI? I couldn’t remember whether she had the chance to shoot all her scenes before her tragic passing last year. When I checked, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, "Rogue One" did resurrect Peter Cushing via CGI -- and he passed away in 1994, more than two decades before the film’s release.
CGI is not yet so good that you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely at the point where you can no longer trust your eyes. And that has some interesting implications.
You Can Now Put Words in Anyone’s Mouth
The "Rogue One" visual effects supervisor, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.”
Whether you agree or not, the bigger issue is that the digital effects crew could have made the Cushing digital doppelganger do or say anything they wanted.
But It’s Not Just Hollywood That Can Warp Reality
If fake reality comes out of Hollywood, we are prepared to cut it some slack. A long and slippery ethical slope defines the entertainment landscape. In the case of "Rogue One," the issue wasn’t using CGI to represent a human -- a technique found in a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.
But what if we could put words in the manufactured mouth of a sitting U.S. president? That’s exactly what a team at the University of Washington did with Barack Obama, building on face-manipulation research like Stanford’s Face2Face. They used a neural network to create, in effect, a lip-sync video of Obama: the computer manipulated images of his face so that his mouth matched a sample of audio taken from another speech.
Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s -- they were just spoken at two different times. But someone less scrupulous could easily synthesize Obama’s voice -- or anyone’s -- and sync it to video that would be indistinguishable from reality.
Why We Usually Believe Our Eyes
When it comes to a transmitted representation of reality, we accept video as the gold standard. Of our five senses, we trust sight the most to differentiate what’s real from what’s fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop changed that. Now it’s video’s turn. Technology has handed us tools that let us manufacture any reality we wish and distribute it in the form of video. And because it arrives in that form, almost everyone will believe it to be true.
The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically some way to empirically determine what was real, if we just bothered to look for it.
But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight.
Now, these same platforms can be used to distribute media specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves.
And that, according to UBC professor Taylor Owen, is deeply troubling: “It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.
“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no.’”
2018 could be an interesting year.