A Fake War Got 70 Million Views. The Real One's Hard Enough to Follow.
AI-generated battle footage, video game clips shared as combat, and doctored satellite images are flooding social media during the Iran conflict. A Texas governor fell for it. So did millions of others.
A video of an Iranian missile chasing down and destroying a US fighter jet racked up 70 million views on X last weekend.
It wasn't real.
BBC Verify traced the clip to a video game. The jet, the missile trail, the explosion — all rendered in a military simulator. Seventy million people watched it anyway.
This is what war looks like in 2026. Not just bombs and diplomacy. A second front, fought entirely in pixels, where anyone with a laptop can manufacture a battle scene and broadcast it to millions before a single fact-checker wakes up.
The Flood
Within minutes of President Trump announcing "Operation Epic Fury" on Saturday, fabricated footage started pouring across X, Instagram, and Facebook. WIRED reviewed hundreds of posts in the first 24 hours alone. The pattern was consistent: old footage relabeled, AI-generated imagery, and video game clips passed off as live combat.
A sampling of the misinformation greatest hits:
A clip claiming to show ballistic missiles over Dubai had 4.4 million views. It was actually footage of Iranian missiles fired at Tel Aviv in October 2024.
A video of explosions captioned "6 Iranian Hypersonic Missiles hit Israeli Haifa port" got 64,000 views. The footage was from an Israeli attack on Damascus last July.
Texas Governor Greg Abbott reposted a video of a ship being destroyed, writing "Bye bye." It was from War Thunder, a WWII-era combat simulator. The US Navy doesn't even have battleships in service. Community notes corrected the post, but not before it went viral.
Iran's state-affiliated Tehran Times shared a satellite image showing supposed damage to a US radar system in Qatar. Financial Times analysis confirmed the image was AI-altered — taken from Bahrain, digitally modified, and presented as proof of a successful strike. It got nearly a million views and stayed up for two days.
Follow the Money
Here's the thing most people miss about war misinformation in 2026: there's a business model behind it.
X's creator revenue sharing program pays users based on engagement. More views, more money. Some creators earn hundreds of dollars monthly by posting content that drives clicks. During a war, nothing drives clicks like dramatic combat footage — real or not.
X noticed. On March 3, head of product Nikita Bier announced new rules: post AI-generated war footage without labeling it, and you'll lose revenue sharing for 90 days. Do it again, permanent ban.
It's a start. But the policy only covers AI-generated content, not misinformation in general. Old footage relabeled as new? Video game clips? Doctored satellite images? Those fall through the cracks.
And X isn't the only problem. Meta's platforms carried the same fakes. The 70-million-view missile video spread across Instagram and Facebook too.
The Coordination
Some of this isn't random attention-seekers chasing ad revenue. It's organized.
X revealed it dismantled a network of 31 coordinated accounts operating from Pakistan. All were hacked accounts with usernames changed on February 27 — the day before the strikes — to variations of "Iran War Monitor." They pumped out AI-generated footage designed to look like citizen journalism.
One account, posing as a journalist from Gaza under the name "Ahmed Hamdan," posted a deepfake of an Iranian rocket hitting a ship in Tel Aviv. The account, the name, and the video were all fabricated.
State actors are in the mix too. Both sides have incentives to manipulate what the world sees. Pro-Iranian accounts exaggerated strike damage against Israel. Pro-Israeli accounts shared old Iranian protest footage and claimed it showed current anti-regime uprisings.
Why It's Harder Than You Think
Here's the uncomfortable part. Spotting fakes used to be about counting fingers on AI-generated hands. Those days are gone.
The Verge interviewed verification experts from the New York Times, Bellingcat, and Indicator about how they separate real from fake. Their process involves cross-referencing metadata, checking geolocation against satellite imagery, running Google's SynthID watermark detector, and consulting with analysts who know the specific terrain of conflict zones.
That's hours of work per image. The fakes take minutes to create.
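One building block of that workflow, reverse-image search, boils down to comparing compact fingerprints of images so a "new" clip can be matched against archived footage. Below is a minimal sketch of an average hash in pure Python; it assumes the 8x8 grayscale grids are already decoded into flat lists of pixel intensities (real tools use libraries such as Pillow or imagehash to do the decoding, and far more robust hashes).

```python
# Illustrative sketch of perceptual hashing, the idea behind reverse-image
# matching. The 8x8 grayscale grids below are assumed to be pre-decoded
# lists of 64 pixel intensities (0-255); real pipelines decode actual files.

def average_hash(pixels):
    """Hash a grayscale grid: one bit per pixel, set if above the mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Count differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")

# Two near-identical frames, e.g. a clip reposted with light re-encoding,
# plus an unrelated frame for comparison.
original  = [10 * (i % 16) for i in range(64)]
reposted  = [min(255, 10 * (i % 16) + 4) for i in range(64)]
unrelated = [255 - 4 * i for i in range(64)]

# A small distance suggests the same underlying footage, just relabeled.
print(hamming(average_hash(original), average_hash(reposted)))   # small
print(hamming(average_hash(original), average_hash(unrelated)))  # larger
```

Brightness shifts and mild compression barely change which pixels sit above the mean, so relabeled copies of old footage hash close to the original, while unrelated scenes do not. That is why the "Dubai missiles" clip could be traced back to October 2024 Tel Aviv footage in one lookup, yet the lookup still has to be run, image by image.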
Full Fact, the UK fact-checking organization, put it bluntly: "Even when AI images seem low quality, or still have a visible watermark on them, we often see them shared at scale."
Sam Stockwell, a researcher at the UK's Centre for Emerging Technology and Security, flagged a new twist: people are feeding suspicious videos to AI chatbots and asking them to verify whether the footage is real. The chatbots get it wrong. Then people screenshot the chatbot's "confirmation" and share that as evidence.
AI generating the fakes. AI "verifying" the fakes. A closed loop where truth has no entry point.
The Real Cost
None of this is abstract. When 70 million people watch a fake missile strike, it shapes how they understand the conflict. It affects which side they sympathize with, which policies they support, which leaders they trust.
During the June 2025 Israel-Iran conflict, the same pattern played out — AI videos of Iranian military capabilities and Israeli damage circulated widely. BBC documented the phenomenon then. Nothing changed. The tools got cheaper, the output got better, and here we are again, six months later, with the same problem at twice the scale.
The platforms know. The fact-checkers know. The experts know. And still, a video game clip ends up on a governor's feed presented as military triumph.
What You Can Do
The verification experts offer a few practical steps. Check the source account's history — was it created recently or did it suddenly change names? Reverse-image search any dramatic photo before sharing it. Be especially suspicious of "perfect" footage that shows exactly what one side wants you to see.
But the bigger lesson is simpler: in 2026, seeing isn't believing. Especially during a war.
The gap between what happened and what people think happened has never been wider. Every conflict now comes with a shadow conflict — one fought in group chats, algorithm feeds, and recommendation engines. The weapons are cheap. The damage is real.
And 70 million views later, most people still don't know the missile was from a video game.