When Real Footage Gets Called Fake: The Israeli Strike on a British Journalist That Proved Disturbingly Authentic
A Very Real Explosion in a Very Fake World
Here is the absurd reality of covering conflict in 2026: a British journalist nearly gets blown up on camera in southern Lebanon, and the first question millions of people ask is not "Is he alright?" but "Is that even real?"
On 19 March 2026, RT correspondent Steve Sweeney and cameraman Ali Rida Sbeity were reporting near the Al-Qasmiya Bridge, north of Tyre in southern Lebanon, when an Israeli airstrike landed mere metres from their position. The footage is harrowing. It is also, as the BBC has taken pains to verify, entirely genuine.
The fact that a major news organisation felt compelled to lead with "not AI-generated" in its headline tells you everything about the information landscape we are now navigating.
What Actually Happened
Both journalists sustained shrapnel injuries from the strike and were hospitalised. Sbeity later confirmed in a separate video that both were recovering. The Committee to Protect Journalists (CPJ) condemned the incident, with regional director Sara Qudah calling it a violation of international law.
The IDF's response was characteristically measured in tone if not in ordnance. It stated that "an explicit warning had been issued regarding this area" and that the crossing "was struck after sufficient time had passed since warnings." The IDF maintains the bridge was a legitimate target used by Hezbollah for weapons transfers.
It is worth noting that Sweeney works for RT, the Russian state-funded broadcaster, whose editor-in-chief Margarita Simonyan characterised the strike as a deliberate attack on the press. Whether the journalists were specifically targeted remains unverified by independent sources, though the proximity of the strike to clearly identified press personnel raises serious questions.
The AI Misinformation Crisis Eating This Conflict Alive
The reason the BBC framed its headline around authenticity is not editorial quirk. It is necessity. Since the US-Israel-Iran conflict began on 28 February 2026, AI-generated misinformation has reached genuinely unprecedented levels.
The numbers are staggering. BBC Verify identified over 110 unique AI-generated posts in just the first two weeks of fighting. One fabricated video purporting to show missiles hitting Tel Aviv appeared in more than 300 posts and racked up tens of millions of views.
But here is where it gets properly bizarre. The problem is not just fake content being believed. It is real content being dismissed as fake. X's Grok chatbot has been caught doing both: confirming AI-generated footage as authentic and branding genuine videos as deepfakes. When an authentic video of Benjamin Netanyahu in a cafe circulated, Grok confidently declared it AI-generated. Verification expert Tal Hagin has stated bluntly that AI detectors "cannot be trusted."
The Atlantic Council's DFRLab found over 300 contradictory Grok responses to a single video. That is not a tool. That is a coin flip with extra steps.
The "Liar's Dividend" in Full Effect
Security researchers have long warned about the "liar's dividend" of deepfake technology: once people know fakes exist, everything becomes deniable. We are watching that theory play out in real time during a shooting war, and it is deeply uncomfortable.
On 4 March, X announced 90-day monetisation suspensions for unlabelled AI-generated conflict content. The UAE went further, arresting 35 people for sharing AI-generated war videos. These are sticking plasters on a severed artery.
When a journalist gets hit by shrapnel on camera and the public's default reaction is scepticism, something fundamental has broken in how we process reality. BBC Verify's Shayan Sardarizadeh has suggested this conflict may hold the record for the most viral AI-generated videos of any war. Ever.
Why This Matters Beyond the Headlines
The strike near Sweeney came just a day after Al-Manar programming director Mohamed Sherri and his wife were killed in a separate Israeli strike on central Beirut. Press safety in this conflict is deteriorating rapidly, and the fog of AI-generated nonsense makes accountability even harder to establish.
We have entered a period where proving something happened requires almost as much effort as the reporting itself. That should worry everyone, regardless of where you sit on any geopolitical fence.
The footage of Sweeney's near miss is real. The injuries were real. The only thing artificial about this story is the world's increasingly broken ability to tell the difference.