The emergence of artificial intelligence technology has given anyone on the internet the power to produce photorealistic conflict images. The extent to which this is affecting the public and our shared sense of truth is only beginning to be understood.
Feature Analysis: March 2025.
Just one day after Hamas’s attack on southern Israel, on October 8, 2023, social media feeds were flooded with compelling photographs of burning skylines, destroyed buildings, and bloodied civilians. Some were genuine, but many were not. Fact-checkers at AFP and BBC Verify identified images from Syria and Iraq, footage lifted from video games, and pictures generated entirely by artificial intelligence, all of which had been shared by millions of people. [1] [2]
This is not a tale of a few malicious actors. This is a tale of a shift in the way we view conflict and how that is being exploited.
The machine that brings war to life and makes it look like anything
Diffusion models are the artificial intelligence technique behind tools such as Midjourney and Stable Diffusion, which can produce photorealistic depictions of events that never occurred in a matter of seconds, from nothing more than a plain-language prompt, at no cost [3]. The barrier to creating falsehoods has not merely been lowered; it has been made virtually non-existent. What once required state resources and a propaganda apparatus now requires only a browser tab. [4]
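To make the mechanics concrete, the core idea of a diffusion model can be sketched in a few lines: start from pure noise and repeatedly "denoise" toward an image. The toy below is a deliberately simplified illustration, not a real generator; systems like Stable Diffusion replace the hand-written blending step with a trained neural network predicting the noise to remove at each step, operating on millions of pixels rather than a four-value "image".

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Start from pure noise and iteratively nudge each value toward
    the target, mimicking the reverse-diffusion process. In a real
    model there is no known target: a trained network predicts, at
    each step, what noise to subtract."""
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in target]  # pure noise
    for t in range(steps):
        # Blending weight grows as we approach the final step,
        # standing in for the model's learned denoising schedule.
        alpha = 1.0 / (steps - t)
        x = [xi + alpha * (ti - xi) for xi, ti in zip(x, target)]
    return x

target = [0.2, -0.5, 0.9, 0.0]   # a stand-in "image" of four pixels
result = toy_denoise(target)
```

The point of the sketch is the shape of the loop, not the arithmetic: generation is an iterative refinement of noise, which is why the same pipeline can be steered by any text prompt toward any scene.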
The imagery created by such tools takes advantage of a basic aspect of the way the human mind processes visual information: the trust we tend to place in photographs over text. When a text claims a hospital was bombed, it is immediately suspect. When the same hospital is depicted in a photograph, the claim feels undeniable.
The willing participant
It would be convenient to point a finger at a specific state actor, a troll farm, or a particular propagandist. While there is certainly culpability on their part, communication studies has a term for the phenomenon driving the vast majority of the problem: participatory misinformation. Users willingly propagate information they believe to be genuine because it confirms what they already believe [5]. The Israeli-Palestinian conflict is a deeply emotional issue, with a history as long as it is contentious and a moral dimension felt by communities across the globe. It is exactly the kind of issue where emotional resonance trumps the impulse to verify. [6]
The algorithms driving the platforms we use to consume information are built to maximize engagement, and engagement tends to follow the strongest emotional reaction. A fabricated photograph of a humanitarian crisis or a dramatic act of resistance will, by design, travel farther than a well-researched correction. The result is a form of affective polarization: the two sides are not merely arguing over the facts, they are living in different realities, each with its own curated feed of fabricated evidence. [7]
Professional journalism has adapted as well. Organizations such as Bellingcat, AFP Fact Check, and the BBC’s disinformation team have developed advanced verification workflows that combine reverse image searches, metadata analysis, geolocation, and emerging AI-detection technology [8]. They are doing vital work, but they are fundamentally behind the curve: social media moves in real time, while verification takes hours. This is not a technological problem; it is an architectural one. [9]
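One building block of reverse image search is the perceptual hash: a compact fingerprint that stays stable when an image is resized or lightly edited, letting fact-checkers match a viral photo against archives of older material. The sketch below implements the simplest variant, an "average hash", over a grayscale image represented as a plain 2D list; real services use far more robust descriptors, and the variable names and sample "images" here are purely illustrative.

```python
def average_hash(pixels, hash_size=8):
    """Fingerprint a grayscale image (2D list of 0-255 ints).
    Downscale to hash_size x hash_size by block-averaging, then
    emit one bit per cell: 1 if the cell is brighter than the mean."""
    h, w = len(pixels), len(pixels[0])
    small = []
    for i in range(hash_size):
        for j in range(hash_size):
            block = [
                pixels[y][x]
                for y in range(i * h // hash_size, (i + 1) * h // hash_size)
                for x in range(j * w // hash_size, (j + 1) * w // hash_size)
            ]
            small.append(sum(block) / len(block))
    mean = sum(small) / len(small)
    return [1 if v > mean else 0 for v in small]

def hamming(h1, h2):
    """Number of differing bits; small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Two synthetic 16x16 "images": dark-left/bright-right, and its mirror.
A = [[10] * 8 + [200] * 8 for _ in range(16)]
B = [[200] * 8 + [10] * 8 for _ in range(16)]
```

A lightly brightened copy of `A` produces the identical 64-bit hash, while the mirrored image `B` differs in every bit, which is exactly the property that lets a recycled Syria photo be matched to its 2014 original despite recompression.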
What can actually be done
Technical countermeasures such as watermarking and metadata standards like the Coalition for Content Provenance and Authenticity (C2PA) are important but insufficient [10]. They help identify synthetic content after the fact but do nothing to address the underlying behavior of sharing. The real answer is media literacy: teaching people to pause before sharing, to ask who made an image and why, and to treat visual depictions of war as claims to be examined rather than accepted. [11]
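The core idea behind provenance standards like C2PA is to bind a cryptographically signed manifest, stating who made the content and with what tool, to the content bytes themselves, so any later edit breaks the signature. The sketch below illustrates that binding with a plain HMAC; this is a drastic simplification, since real C2PA uses certificate-based signatures rather than a shared secret, and the key, field names, and manifest structure here are invented for the example.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical key; C2PA uses X.509 certificates

def sign_manifest(image_bytes, claims):
    """Attach a signed provenance record to content: the manifest embeds
    a hash of the image bytes, and the whole manifest is signed."""
    manifest = dict(claims, content_hash=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_manifest(image_bytes, signed):
    """Reject if the manifest was altered OR the image bytes changed."""
    payload = json.dumps(signed["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["signature"]):
        return False
    return signed["manifest"]["content_hash"] == hashlib.sha256(image_bytes).hexdigest()

image = b"...image bytes..."
record = sign_manifest(image, {"generator": "ExampleCam v1", "created": "2025-03-01"})
```

The structural weakness the article points to is visible even in this toy: verification only helps the person who bothers to run it, and nothing in the scheme stops an unsigned image from being shared anyway.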
The Middle East conflict is not the first war in which synthetic media has been used as a weapon of perception, and it will not be the last. Every future conflict will be shadowed by this phenomenon. Countries that do not build the infrastructure to deal with it, technologically, journalistically, and educationally, will find that the most important battles are no longer fought over territory but over what people perceive to be true.
Edited by Oriane Beveraggi.
References
[1] AFP/CEDMO. (2023, November 29). Image of child trapped under rubble predates Gaza war and shows signs of AI. CEDMO Hub. https://cedmohub.eu/image-of-child-trapped-under-rubble-predates-gaza-war-and-shows-signs-of-ai/
[2] Sardarizadeh, S. (2023, October 13). BBC expert on debunking Israel-Hamas war visuals: “The volume of misinformation on Twitter was beyond anything I’ve ever seen.” Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/bbc-expert-debunking-israel-hamas-war-visuals-volume-misinformation-twitter-was-beyond
[3] Scale AI. (n.d.). Diffusion Models: A Practical Guide. https://scale.com/guides/diffusion-models-guide
[4] Paris, B., & Donovan, J. (2019). Deepfakes and Cheap Fakes. Data & Society Research Institute. https://datasociety.net/library/deepfakes-and-cheap-fakes/
[5] Lühring, J., et al. (2024). Emotional resonance and participatory misinformation. HKS Misinformation Review. https://misinforeview.hks.harvard.edu/article/emotional-resonance-and-participatory-misinformation-learning-from-a-k-pop-controversy/
[6] Soufan Center. (2023, October 26). IntelBrief: AI-Powered Disinformation in the Israel-Hamas War and Beyond. https://thesoufancenter.org/intelbrief-2023-october-26/
[7] Guo, L., et al. (2025). “Engagement, User Satisfaction, and the Amplification of Divisive Content on Social Media.” PNAS Nexus. https://doi.org/10.1093/pnasnexus/pgaf062
[8] TechBuzz AI. (2026, March 4). How Newsrooms Battle AI-Generated Misinformation. https://www.techbuzz.ai/articles/how-newsrooms-battle-ai-generated-misinformation
[9] Mahadevan, A. (2023). As misinformation surges during the Israel-Hamas war, where is AI? Poynter. https://www.poynter.org/fact-checking/2023/israel-hamas-war-artificial-intelligence-misinformation-fake-images/
[10] C2PA / Content Authenticity Initiative. (n.d.). How it works. https://contentauthenticity.org/how-it-works
[11] OpenAI Help Center. (n.d.). C2PA in ChatGPT Images. https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-images
Cover picture: https://unsplash.com/fr/photos/un-grand-ecran-avec-une-carte-du-monde-dessus-ZXysY_49jDM


