Missiles destroying Tel Aviv's airport, Ayatollah Ali Khamenei lying dead amid the rubble, and an American truck full of soldiers rolling into Tehran. In recent weeks, the plausible has begun to overtake the real, as hundreds of photos and videos created with artificial intelligence (AI) have flooded social media in Iran, Israel, the United States, and beyond. They are no longer just photos but elaborate videos, which have added another layer of chaos and confusion to the online conflict.

The New York Times identified more than 110 unique AI-generated images and videos about the war in the Middle East over the past two weeks. The forgeries covered every aspect of the fighting: they falsely showed Israelis screaming as explosions ravaged Tel Aviv, Iranians mourning their dead, and American ships struck by missiles and torpedoes. Collectively, these videos were viewed millions of times online across platforms like X, TikTok, and Facebook, and countless more times on popular private messaging apps in the region and around the world.

To identify these videos as fakes, the newspaper had to analyze not only "obvious" signs, such as non-existent buildings or incoherent text in the footage, but also "invisible" watermarks and the videos' metadata. The posts were also checked with multiple detection tools. A new, sophisticated generation of AI tools allows virtually anyone to create war simulations realistic enough to deceive the naked eye, at almost no cost to the creator.

Similar content has already spread in other conflicts, such as the war between Ukraine and Russia. But this war has multiple fronts, and false content has proliferated since the United States and Israel first attacked Iran, according to experts. "Even compared to the start of the Ukraine war, the current situation is very different," said Marc Owen Jones, an associate professor of media analysis at Northwestern University in Qatar.

“We are probably seeing much more AI-related content now than ever before,” he warns. The content has become a powerful…

"The use of AI images of places in the Gulf that are burned or damaged is becoming increasingly important in Iran's strategy," Jones stated, "because it allows them to give the impression that this war is more destructive, and perhaps more costly for U.S. allies, than it actually is."

In one of the most widely circulated fake videos online, a shaky handheld scene, apparently filmed from a balcony in Tel Aviv, shows the skyline bombarded with missiles while an Israeli flag waves in the foreground. According to an analysis of social media activity conducted by The Times, the video was viewed millions of times across various platforms and was shared by social media influencers and fringe news websites. A purported image of Ali Khamenei among the rubble spread just as rapidly.

The image went viral on X, TikTok, and Telegram on February 28. It shows an elderly man, his face covered in dust and debris, presented as the supreme leader of the Islamic Republic, supposedly found dead under the ruins of his residence. Within hours, it reached tens of millions of views worldwide and was even shared by journalists.

After closer analysis, several visual clues revealed the manipulation: unnatural facial proportions, blurry edges in the rubble, and an oddly uniform skin tone. Most importantly, the photograph could not be attributed to any reliable source: neither major international news agencies nor Iranian state media had circulated it; it simply circulated on social media. For BBC Verify journalist Shayan Sardarizadeh, an expert in debunking videos and images, "what has changed in the past year is that generative AI has become much more accessible."

"It is now possible to create very credible videos and images that appear to show a significant war incident and that are difficult to detect for the untrained or casual eye," he believes. Meanwhile, Professor Hany Farid, a specialist in digital forensics at the University of California, Berkeley, points out: "Ten years ago there were one or two fake things out there; they were quickly debunked. Now there are hundreds of them, and they are really realistic." In his opinion, another factor worsens the situation as media and social networks grow more polarized and divided.

For some time now, platforms have been abandoning their content moderation programs. "The content is more realistic, the volume is greater, the penetration is deeper: this is our new reality. And it is really chaotic," Farid said.