Propaganda has always been a weapon of war, but the digital revolution has increased its reach, immediacy and potency. This makes it increasingly difficult for the average person, and even for experts, to work out what is true and what isn’t.
To understand this information war, we need to understand where and how arguments and ideologies are promoted and developed online.
In some instances, online propaganda simply involves the framing of real events, violent images and videos, and hate speech to emphasise the guilt of one side and vindicate the other.
But much material relies on the creation of what’s commonly referred to as fake news. This often takes the form of fabricated stories published on social media that repurpose or mislabel real photos or videos.
For example, one post on X (formerly Twitter) that was viewed 300,000 times used a photo of an accidental fire at a McDonald’s restaurant in New Zealand to falsely claim the company had been attacked by pro-Palestinian protestors for its perceived support of Israel. Despite being debunked, the story was still the focus of heated discussions on social media channels.
There are also reports of excerpts from video games and old TikToks being shared with claims they are from real current events in Gaza, and fake government agency social media accounts posting disinformation.
Advances in AI are also playing a role. Experts in digital forensics have shown how AI-faked photographs of bloodied babies and abandoned children in Gaza were being widely used in November 2023. These were being published at the same time as the media was trying to investigate allegations that babies had been beheaded in the Hamas attack of October 7.
Deepfake videos have been used in the Gaza conflict to show prominent figures in the Middle East saying things they never said and, it is reasonable to assume, do not believe. Edited battlefield footage from Ukraine and modified footage from high-end military computer games have also been passed off as deepfaked “Gazan footage”, with the Associated Press keeping an extensive archive of examples.
Based on what we know about misinformation on other subjects, it’s likely that much of this online propaganda about Gaza isn’t being generated by individual supporters posting randomly on social media. Misinformation contractors now make their services available on the dark web (a part of the internet accessed through anonymising software that makes it very difficult to identify users) to people looking to mount widespread campaigns.
On the dark web, those developing mis- and disinformation can use techniques employed by legitimate marketing companies in the outside world. They can experiment with messages and test the responses they receive. On dark web forums, groups of activists can collaborate on messaging, imagery, timing and targeting to best effect.
Another source of much misinformation is “troll farms”, which are staffed by government agents or their proxies in China, North Korea and Russia, among other countries. These are groups who identify the messages they think will change attitudes and amplify them through coordinated social media campaigns.
They are increasingly using AI-driven bots programmed to spread particular narratives, keywords or phrases. “Viral” bots magnify the reach of this content by getting networks of other bots to repost it, which in turn encourages search engine and social media algorithms that favour popular and provocative posts to give it greater prominence.
The dark web origins of misinformation make it much harder for governments to track and stop the people creating it, as does the use of encrypted messaging services such as WhatsApp and Telegram to share content. By the time the authorities have identified a piece of misinformation, it may have been seen by many thousands of people across multiple channels.
The traditional media is also struggling to sift through and counter the weight of misinformation about Gaza, which appears on social media much faster than journalists can verify or debunk it. And the deaths of so many journalists in Gaza are making accurate news harder to gather.
Media outlets are often accused of bias in both directions. So when traditional news is seen as inadequate or hard to come by, people are more likely to turn to social media and its flood of dark web-created misinformation.
The information war in Gaza is a war of values and of behaviours, of establishing who is “them” and who is “us”. The war in Ukraine is exactly the same. The danger is that, in shaping the public’s view, the information war could have an impact on governments and on the battlefield.