
Right now, if you scroll through social media looking for
updates about the war involving Iran, the United States, and Israel, you will see an extraordinary amount of war footage.
Missiles streak across night skies. Drone footage shows explosions
blooming across cities. Videos claim to show missile strikes in Tel Aviv or massive explosions in Gulf cities. The clips look and sound real, and they spread with astonishing speed.
But a
growing number of them never happened at all.
Investigators and journalists tracking information about the current conflict have documented waves of AI-generated videos, fabricated
satellite images, and manipulated footage circulating online.
Bellingcat and the Atlantic Council’s Digital Forensic Research Lab have flagged multiple viral clips — some
accumulating tens of millions of views — as synthetic or recycled from entirely different conflicts.
What is striking is not simply that these images exist. It’s that the tools
required to create them are now widely available.
What once required professional visual-effects teams can now be done in minutes with consumer AI tools. Video generators, image editors, and
automated posting pipelines mean that convincing war footage can be created and distributed at scale.
Because social-media platforms reward engagement, the incentive to create such images is
powerful. Viral posts generate attention, followers, and in some cases direct payments through creator monetization programs.
Fake war footage has become a business model.
In my
upcoming book “The Future of Truth,” I explore how the information ecosystem reached this point.
One of the most revealing examples comes from an unlikely source: a
satirical conspiracy theory called Birds Aren’t Real.
When I first came across the movement while researching the book, it felt like a piece of internet absurdism — funny and
obviously ridiculous. But the deeper I looked, the clearer it became that the joke was actually a live experiment in how belief travels on the modern internet.
The movement was created by
Peter McIndoe, a college student who wanted to parody the explosion of internet conspiracy culture. The premise was deliberately absurd. According to the story, the U.S. government had secretly
exterminated birds decades ago and replaced them with surveillance drones designed to monitor the population.
Birds, in this narrative, weren’t birds at all. They were government
robots.
The idea was intentionally ridiculous, a piece of performance art aimed at exposing how conspiracy theories spread online. McIndoe staged rallies, printed merchandise, and repeated the
slogan everywhere he could: Birds aren’t real.
But once the idea entered the algorithmic bloodstream of social media, something interesting happened. The joke spread. Thousands of
people began repeating the phrase. Some did it ironically, while others appeared to take it seriously.
The project became a strange cultural experiment showing how easily narratives can travel
in an engagement-driven media environment.
In my book, I describe the deeper lesson this way: “In a networked world, ideas don’t spread because they are true. They
spread because they are interesting enough to be repeated.”
The Birds Aren’t Real movement was satire. But the system that allowed it to spread—the engagement-driven
algorithms of social media—is the same system now shaping how millions of people experience the war involving Iran today.
When the conflict intensified, people immediately turned to
social media to understand what was happening.
But the same platforms that distribute authentic documentation also distribute simulations, manipulated images, satire, propaganda, and
AI-generated fiction. To the viewer scrolling quickly through a feed, these things often look identical.
A fabricated satellite image may appear to show damage at a naval base. A video
generated by artificial intelligence may depict missiles striking a city skyline. A recycled clip from a different conflict may be relabeled as breaking news.
All of it moves through the same
algorithmic bloodstream, and because engagement drives visibility, the most dramatic images often travel the furthest.
The result is not that the war itself is fictional. The destruction is
real, the casualties are real, and families across the region are experiencing the consequences of violence and instability.
But the way the public experiences the war is increasingly mediated
by a digital environment where images can be generated, manipulated, and amplified faster than journalists or investigators can verify them.
The battlefield exists in physical space. But the
war most of us experience now lives inside the feed.
This is why the Birds Aren’t Real story matters. The satire worked because it exposed a vulnerability in the information
system.
It showed that once algorithms begin amplifying a narrative, the distinction between joke, belief, propaganda, and entertainment becomes harder to see.
The creators of the
movement understood that they were not really talking about birds. They were talking about the fragility of belief in a media ecosystem optimized for attention.
Today that fragility is
colliding with a new technological reality.
Artificial intelligence can now generate convincing images of events that never occurred. Those images can spread instantly through networks
designed to reward virality rather than accuracy. Automated recommendation systems ensure that dramatic content travels further and faster than careful reporting.
As a result, the visual
record of reality itself becomes unstable.
In my book, I argue that we are entering a moment when truth will increasingly compete with synthetic alternatives.
As I write, “AI
doesn’t just distribute information. It can generate an infinite supply of believable alternatives to reality.”
That distinction matters because it changes the nature of the
threat. Previous information crises were about distortion — taking real events and spinning them. What is emerging now is something different: the wholesale manufacture of plausible events that
never happened at all.
When that happens, people begin responding in two different ways. Some start believing everything they see, absorbing each dramatic image as confirmation of what they
already fear. Others stop believing anything at all, developing a reflexive skepticism that leads them to dismiss genuine documentation alongside the fake.
Both responses corrode the shared
evidentiary ground that democratic societies depend on. You cannot debate what to do about a war if you cannot agree on what the war looks like.
Which is why a phrase that sounds absurd at
first glance begins to capture something about our moment: War isn’t real.
Not because the conflict itself is imaginary, but because the version of the war most of the world sees is
assembled inside a digital ecosystem optimized for engagement rather than accuracy. In that world, reality itself has to compete for attention.