Image manipulation is nothing new.
In 1860, Abraham Lincoln posed for photographer Mathew Brady. Lincoln was rumored to be ugly, so Brady “focused excessive amounts of light on
Lincoln’s face… to distract from his ‘gangly’ frame. He had the future president curl up his fingers so… their remarkable length would go unnoticed… [He] even
‘artificially enlarged’ Lincoln’s collar so that his neck would look more proportional,” according to a post on Atlas Obscura.
In the 1930s, Joseph Stalin regularly had photos altered to smooth his
pockmarked face or to remove people who had fallen out of favor.
Photoshop was first released in 1990, became a verb in the early 1990s, and became known for obvious fakery shortly thereafter.
The “BadPhotoshop” subreddit was born in 2010. In 2011, officials in Huili County, Sichuan, China, had to apologize for a hilariously Photoshopped picture of local officials inspecting a road. The artist James Fridman has become famous for intentionally misinterpreting Photoshop requests.
Photoshop had democratized altering images poorly, but altering images well still took extensive skill. Plausibly falsified images remained rare enough that, even as the technology grew more sophisticated, the phrase “pics or it didn’t happen” -- a demand for photographic evidence to back an outlandish claim -- gained traction over the past 20 years.
The first generative AI
images -- certainly the first ones involving people -- were worse than any BadPhotoshop post. Fingers especially came out as abominations. But in the past two years they’ve gotten to the point
where our ability to distinguish real images from AI-generated ones is
deteriorating.
Photoshop’s generative AI feature is now being used for insurance
fraud, adding nonexistent property to “before” pictures or fake damage to “after” ones.
This week, Chris Welch posted a thread reviewing the “Reimagine” feature on Google’s new Pixel 9 phone.
“With a simple prompt,”
Welch says, “you can add things to photos that were never there. And the company's Gemini AI makes it look astonishingly realistic. This all happens right from the phone's default photo editor
app. In about five seconds.”
He shares a bunch of examples: a quiet city street suddenly gains a car-versus-bike accident scene; a new puppy appears on the couch beside a little girl; subway tracks become inundated with floodwaters. “Can you find some kind of obvious AI giveaway in this image? Because I can't.”
He creates a Whole Foods with an added snake. A bus with
an added donkey. A pizza joint with an added alligator. The additions are astonishingly realistic: angles, perspective, lighting, even blur from depth of field all match the context.
Welch
puts bugs and garbage in his Panda Express order. “Mutilating food is extremely simple. Watch out, Yelp.” He gives his friend a drug problem, makes a helicopter crash-land, places bombs
around New York City. “(Dear law enforcement: I swear I was only doing this as a journalist trying to test the limits of Google's AI.)”
This technology will only become more
accessible and easier to use. “These are just examples I've made in the first few days of testing Google's Pixel 9 and Pixel 9 Pro. None of them took more than 10 or 15 minutes -- even the shots
that have multiple elements generated by AI. Now I want you to contemplate what someone will be able to do with Reimagine in an hour. Or a day. Patiently trying over and over again until the model
produces exactly what they've imagined. We're in for some truly wild, weird, creative, and unsettling stuff.”
Similar technology -- or better -- will soon be available on every phone, not just the Pixel. We are at the point where you should approach every image you see -- certainly the shocking ones -- with skepticism. And even then, you’ll almost certainly be fooled sometimes.
Pictures and proof are no longer synonyms.