Commentary

AI Poses New Challenges to Spotting Fake News

Earlier this month, The New York Times ran a piece called “Here Come the Fake Videos, Too,” in which a writer contacted a company specializing in AI-driven “deep fake” video production (the highest-quality fakes, and the most difficult to spot) to try out the technology for himself.

His interest was sparked by a slew of incendiary videos that have emerged recently, some of which put the faces of national leaders on the bodies of people in compromising positions.

The article was a response to fears that Americans must worry not only about fake news articles (the piece outlines a doctored story, made to look like a BuzzFeed report, that circulated after the Parkland shooting in Florida) but also about hyper-realistic videos, in which anyone can be made to say anything through the fast-developing technology.


Michael Fauscette, Chief Research Officer of G2 Crowd, a business software review platform, spoke to Publishing Insider about the implications of AI and the issues it raises for journalistic trust and national security.

While he admits it’s difficult to assess the national security risk such videos pose, Fauscette says: “The technology to build ‘deep fake’ videos has improved quite a bit over the past few months and will continue to improve, probably at an alarming pace. The videos are good enough today to fool ‘civilians,’ which means that the threat is real.”

He continues: “Could they be used to start a nuclear war? I hope we have enough controls in place that this scenario is a little farfetched. It is, though, not impossible, especially as the videos improve in quality. It’s more likely in the short term that the threat is much more on the level of individual coercion.

"As we know, coercion is a security risk, so from that perspective, serious situations could arise. The threat of personal embarrassment is powerful, especially if it is difficult or costly to prove that the story is fake.” 

However, the same processes used to create these videos can also be used to debunk them. Fauscette points to the blood flowing in and out of a person’s face: in genuine footage it produces slight, rhythmic variations in skin color, and a fake that fails to reproduce those variations gives its inauthenticity away.

“This is similar to one of the methods used to improve biometric fingerprint scanners, pulse detection,” he says. “There are also AI tools that are being used to spot fakes. But as soon as we make advancements to try and catch the fakes, the arms race heats up and the fakes get better. Intelligent security leads to intelligent hacking. The same is bound to happen with the AI-driven fake videos.”
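
Fauscette doesn’t describe an implementation, but the underlying idea is simple enough to sketch. The Python below is a minimal, illustrative sketch rather than anything from the article: the function name pulse_score, the assumption that face regions have already been cropped out of the video, and the choice of a 0.7–4 Hz heart-rate band are all assumptions of mine. It tracks the average green-channel brightness of a face over time (the channel most sensitive to blood-volume changes in remote pulse detection) and measures how much of the signal’s energy falls where a real heartbeat would put it.

    import numpy as np

    def pulse_score(face_frames, fps):
        """Score how strongly a heartbeat-like rhythm is present in a
        sequence of face crops (each an H x W x 3 RGB array).

        Returns the fraction of spectral energy in the human heart-rate
        band (roughly 0.7-4 Hz, i.e. 42-240 beats per minute). A real
        face flushes slightly with each pulse; a synthesized one may not.
        """
        # Average green-channel brightness per frame; blood flow shows
        # up most clearly in the green channel.
        signal = np.array([frame[..., 1].mean() for frame in face_frames])

        # Strip slow drift (lighting changes, head motion) with a
        # one-second moving average, then window to reduce spectral leakage.
        window = int(fps)
        trend = np.convolve(signal, np.ones(window) / window, mode="same")
        detrended = (signal - trend) * np.hanning(len(signal))

        # Power spectrum of the detrended brightness signal.
        power = np.abs(np.fft.rfft(detrended)) ** 2
        freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)

        # Share of non-DC energy sitting in the plausible pulse band.
        band = (freqs >= 0.7) & (freqs <= 4.0)
        return power[band].sum() / power[1:].sum()

Given a few seconds of genuine footage, a clip with no pulse signature at all should score noticeably lower, though by itself that proves nothing; as Fauscette’s arms-race point suggests, practical detectors layer many such cues, and each one can eventually be learned and faked in turn.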

Looking to the future, Fauscette sees a risk of election manipulation by foreign governments at a much higher level than we’ve seen to date.

The videos produced for the NYT varied in quality; however, as Fauscette notes, given time, the technology will only become stronger. Luckily, teams of researchers are working on their own forms of AI to study it closely and develop digital antidotes.

Meantime, the old cliché “I’ll believe it when I see it” has officially run its course.

1 comment about "AI Poses New Challenges to Spotting Fake News".
  1. Chuck Lantz from 2007ac.com, 2017ac.com network, March 15, 2018 at 9:30 a.m.

    As usual, the problem isn't just recognizing faked videos, but also recognizing those who would use the existence of the video manipulation hardware to smoke-screen valid video evidence. ... "Fake news! ... Fake news!"
