The first deepfakes started emerging at the beginning of 2018. Naturally, because, well, the Internet, the breeding ground was porn: namely, videos with the faces of Gal Gadot, Scarlett Johansson and Taylor Swift superimposed onto the bodies of porn stars.
The videos were shared to Reddit by a user called deepfakes (hence the name of the genre). Reddit banned them by February 2018.
In April that year, BuzzFeed made a deepfake of Obama, voiced by Jordan Peele. “We’re entering an era in which our enemies can make it look like anyone is saying anything,” said Obamapeele. “This is a dangerous time. Moving forward, we need to be more vigilant with what we trust from the Internet.”
But deepfakes did not overwhelm the Web in 2018.
Over a seven-month period in 2019, the number of deepfake videos online almost doubled to 14,678, per then-startup Deeptrace Labs.
Doubling is scary. But 14,678 is still a tiny number in Internet terms. So, no, deepfakes did not overwhelm the Web in 2019.
Writing in MIT Technology Review, Karen Hao and Will Douglas Heaven decided that 2020 was the year that deepfakes went mainstream, citing their use in whistleblower shielding, an alternative history of the 1969 Apollo moon landing, sports ads and political campaigns.
And yet deepfakes did not overwhelm the Web in 2020.
By the following year, VFX artist Chris Umé was making uncannily accurate deepfakes of Tom Cruise.
Even so, deepfakes did not overwhelm the Web in 2021.
In 2022, Umé and his business partner Tom Graham took it a step further, delivering a live deepfake performance on “America’s Got Talent,” where Daniel Emmet was transformed into a young Simon Cowell to perform “You’re The Inspiration.”
But deepfakes did not overwhelm the Web in 2022. In fact, they were still so niche that the ability to create one got Umé and Graham four yeses from the “AGT” judges. Sofia Vergara gushed, “I cannot even imagine the amount of work to be able to create something so perfect.”
Early in the deepfake era, at the end of 2019, Oscar Schwartz wrote in The Guardian that deepfakes are “where truth goes to die.” Around the same time, in a piece called “Jim Acosta, Deepfakes -- And The Death Of Truth,” I warned that the fakes were going to spread: “It’s a cat-and-mouse game that will only get worse. And the advantage is fully with the fakes. You don’t need a perfect fake video to spread a rumor, sow distrust, feed people’s fears and biases, or undermine attempts at common ground.”
But here we are, 2023, and deepfakes have not overwhelmed the Web. Gee, Kaila, that was a shitty prediction.
Maybe, maybe not. Maybe it was just an early prediction.
Because here’s where we’re at now: a world in which you can use text to generate any kind of still image you like, instantly, with tools like DALL-E, Stable Diffusion and Midjourney.
A world in which you can generate personalized speech from a three-second sample of someone’s voice.
A world in which you will soon be able to create video from a natural-language text prompt.
Umé and Graham got four yeses because what they did was hard. But it’s about to be easy and instant for anyone, no artistic or technical skill necessary.
And when that happens, whether it’s this year or next or the following, the volume of fakery will overwhelm the Web.
And when that happens, you tell me: Will truth be alive or dead?