How to deal with visual misinformation circulating in the Israel-Hamas war and other conflicts


In the three weeks since the war between Israel and Hamas began, social media has been flooded with images and stories of attacks, many of which have proved false.

For example, within hours of Hamas’ surprise attack on Oct. 7, 2023, screen grabs from a popular video game were shared by thousands of social media users as if depicting real scenes of violence against Israeli troops in Gaza. Five days later, a real explosion at a hospital in Gaza spurred further sharing of such spurious images to buttress various claims and counterclaims about responsibility for the casualties.

It’s not just this war. Over the past decade, international commissions and tribunals working to mediate conflicts in Syria, Myanmar, Ukraine and elsewhere have struggled to verify the large amount of digital evidence.

As a human rights scholar, I have, of late, been studying the ethics of viewing photos and videos of war and atrocities in situations where falsification of imagery is widespread. A principal lesson of this research is that users of social media have significant power to influence the content they receive and thus bear some responsibility when they consume and share false information.

Defining misinformation and disinformation

Scholars and policymakers distinguish misinformation from disinformation based on the intentions behind their creation and circulation. Misinformation consists of false information that is not created or circulated with the intent to deceive. Disinformation consists of false information, including visual information, that is intended to deceive and do harm.

At the start of any war, misinformation proliferates. Rumors that Ukrainian President Volodymyr Zelenskyy had fled Kyiv spread quickly after Russian forces invaded that country, only to be rebutted by videos posted from the streets of the capital. The difficulty of sifting reports from the ground, along with the reality that Zelenskyy was personally at risk, led many people to accept and share those rumors.

Increasingly, however, false information about conflicts comes from actors – whether governments, military officials, separatist groups or private citizens – intentionally using texts and images to deceive. In Myanmar, for example, military propaganda officers published photographs supposedly depicting Rohingya people arriving in the country under British colonial rule in the mid-20th century. In actuality, these photographs, shared to support the military’s claim that the Rohingya had no right to live in Myanmar, depicted refugees from the 1994 Rwandan genocide.

Conditions for ethical responsibility

As social media becomes saturated with falsified images of mass violence in the Israel-Hamas war, the war in Ukraine and conflicts elsewhere in the world, individuals should ask what ethical responsibility they bear for their consumption of misinformation and disinformation.

Some might deny that users of digital media bear any such responsibility, since they are merely the passive recipients of content created by others.
Philosopher Gideon Rosen claims that when people are passive toward some occurrence, they generally don’t bear ethical responsibility for it. Anyone scrolling the internet will passively encounter hundreds of images and related texts, and it is tempting to assume they bear no responsibility for the images of war and mass violence that they see but only for how they respond to them.

However, users of digital media are not merely passive recipients of falsified images and stories. Instead, they have power to influence the kinds of images that show up on their screens. This means, in turn, that users bear some ethical responsibility for their consumption of visual misinformation and disinformation.

Algorithms and influence

Digital media platforms deliver content to users on the basis of complex decision-making procedures known as algorithms. Through both online and offline behaviors, users help determine what these algorithms deliver.

It is helpful to distinguish between influence and control. Having control over content would mean either encountering only images and stories that one consciously chooses or having the power to screen out any and all unwanted images. It is typical of digital communications, as philosopher Onora O’Neill has pointed out, that users lack the ability to control content in these ways.

Nevertheless, users can significantly influence the material they encounter in digital spaces. The algorithms by which social media platforms and other digital networks deliver content to users are not fully transparent, but neither are they wholly mysterious. In most cases, they are propelled by users’ past engagement with a platform’s content – a fact reflected in the very name of the “For You” page on TikTok.

Liking, tagging, commenting on or merely continuing to watch images of war and atrocities tends to lead to additional encounters with such content. The potential risks of this algorithmic process became apparent in the mid-2010s, when YouTube’s algorithm was found to be leading users into progressively more extreme videos related to jihadist violence.
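To make the link between engagement and exposure concrete, here is a minimal sketch in Python. It is a toy illustration only, not any platform's actual recommendation system: the engagement log, signal weights and scoring functions are all invented for the example. The point is simply that signals such as watching, liking or sharing feed back into what a user is shown next.

```python
# Toy illustration (not any platform's real algorithm): content on topics a
# user has previously engaged with scores higher and surfaces first.
from collections import defaultdict

# Hypothetical engagement log: (topic, signal) pairs from one user's history.
engagement_log = [
    ("war_footage", "watch"),
    ("war_footage", "like"),
    ("cooking", "watch"),
]

# Stronger engagement signals count more toward future ranking.
SIGNAL_WEIGHTS = {"watch": 1.0, "like": 2.0, "comment": 3.0, "share": 4.0}

def topic_scores(log):
    """Accumulate a per-topic affinity score from past engagement."""
    scores = defaultdict(float)
    for topic, signal in log:
        scores[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return scores

def rank_feed(candidate_posts, scores):
    """Order candidate posts by the user's affinity for their topic."""
    return sorted(candidate_posts, key=lambda post: scores[post["topic"]], reverse=True)

if __name__ == "__main__":
    feed = rank_feed(
        [{"id": 1, "topic": "cooking"}, {"id": 2, "topic": "war_footage"}],
        topic_scores(engagement_log),
    )
    print(feed)  # war-footage posts surface first because of past engagement
```

In this simplified picture, withholding engagement from violent content lowers its score and, over time, its prominence in the feed.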

Although major social media platforms have community guidelines prohibiting incitement to violence and sharing of graphic content, those prohibitions are difficult to enforce. In the context of some ongoing wars, they have even been relaxed – with Facebook temporarily allowing posts calling for violence against Russian troops and paramilitary groups occupying parts of Ukraine, for example. Taken together, these processes and policies have opened the door to substantial misinformation and disinformation about armed conflict.

Hiding, reporting or simply disengaging from violent content, by contrast, tends to lead to fewer such messages coming in. It may also reduce the odds that such content will reach others. If one knows that a Facebook friend or TikTok content creator has shared false information before, it is possible to block that friend or unfollow that creator.

Because users have these means of influencing the images they receive, it is reasonable to assign them some responsibility for algorithmically generated misinformation and disinformation.

Verifying images

Altering patterns of engagement with digital content can decrease users’ exposure to misinformation in wartime. But how can users verify the images they do receive before directing others to them?

One simple protocol, promoted by educators and public health groups, is known by the acronym SIFT: stop, investigate, find, trace. The four stages of this protocol ask users to stop, investigate the source of a message, find better coverage, and trace quotes and claims back to their original contexts.

Images, like quotes, can often be traced to their original contexts. Google makes available its reverse image search tool, which allows users to select an image – or parts of it – and find where else it appears online. I found this tool helpful during the first months of the COVID pandemic, when Holocaust photographs were circulated online in posts comparing mask mandates to deportation trains. Of course, as journalists and forensic researchers are quick to point out, such tools can only be applied to a small portion of the images we encounter in our daily lives.
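For readers who want to automate this kind of check, the sketch below shows one way to look up where an image appears elsewhere on the web. It uses the web-detection feature of Google's Cloud Vision API, a programmatic cousin of the consumer reverse image search tool described above, and assumes the google-cloud-vision client library and API credentials are available; the file name is a placeholder.

```python
# A minimal sketch of programmatic reverse image lookup, assuming access to
# the Google Cloud Vision API (not the consumer reverse image search tool).
from google.cloud import vision

def find_image_origins(path):
    """Return URLs of web pages where the image, or close matches, appears."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as image_file:
        image = vision.Image(content=image_file.read())
    response = client.web_detection(image=image)
    detection = response.web_detection
    return [page.url for page in detection.pages_with_matching_images]

if __name__ == "__main__":
    # "suspect_photo.jpg" is a placeholder file name for illustration.
    for url in find_image_origins("suspect_photo.jpg"):
        print(url)
```

Finding that a supposedly new war photograph already appeared online years earlier, in a different country, is often enough to flag it as misinformation.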

No technique or protocol will give users absolute control of the images they see in wartime or provide complete assurance against sharing false information. But users who understand their power to influence content can mitigate these risks and help promote a more truthful future.

Paul Morrow, Human Rights Fellow, University of Dayton

This article is republished from The Conversation under a Creative Commons license. Read the original article.
