The expression "deepfake" refers to artificial-intelligence manipulation of video (as well as audio). Specifically, it's a technique for "human image synthesis": combining and superimposing existing images and videos onto other images or videos using machine learning.
Nice. Viewers can see a politician make a statement. But maybe that isn't really him or her -- just AI-manipulated images. That's what a deepfake video is about, and it's finding a home on social media.
Here is the bottom line. CNN says: “Making a person appear to say or do something they did not has the potential to take the war of disinformation to a whole new level.”
Naturally, short-form political marketing could be a key target -- especially for bad actors. That, in turn, would make it harder for voters to determine what's real and what's fake.
This isn't widespread yet. But it is a key concern wherever video is distributed -- especially on the many loosely supervised digital platforms where access is universal, in particular Facebook, Instagram, Snapchat and Twitter, as well as user-generated video sites like YouTube.
For TV news networks, reporting on this activity is complicated. How do they confirm or disprove such videos? How do they trace their origins?
Viewers also need to get smarter in a hurry. But will they have the time to discern what's what? Right now, total daily media use has plateaued at around 10 hours, 30 minutes a day, according to Nielsen. Now we need to add time for viewers to confirm what they are seeing and hearing.
Let's go further: What happens with live video? Right now, AI-manipulated content is focused on pre-recorded video -- not live video. But one could imagine that changing in the future.
Good news: Expect someone to figure out how to sell advertising around deepfake videos. You know, for fun. Of course, unsuspecting marketers caught up in some self-serving, automated, non-transparent media schedule might feel differently.