Tuesday, April 16, 2024

A new approach for digital media, peace, and conflict

Discussions about the negative effects of online communication on society, including its potential to contribute to violent conflict, tend to focus primarily on misinformation and disinformation. The former refers to factually incorrect information that manages to reach audiences at scale, whereas the latter refers to inaccurate information that is spread deliberately and malignantly by some actor or agent in order to produce specific perceptions and outcomes in physical or digital space.

There is also an enduring perception in many quarters that the internet is an inherently liberating and self-organizing medium, one that is separate and distinct from the “real world.” In this narrative, misinformation and disinformation are the bad parts of the good internet. This is a holdover from the early days of the internet, when discussions tended to emphasize the internet’s potential for educating and uniting people.

This is an outdated and misleading view of what the internet is today.

For most people today, the internet is not the democratizing force that some hoped it would be during events like the Arab Spring. Rather, most people experience the internet as a rigid, highly organized, and closely monitored medium of expression and connection dominated by corporate tech giants and, perhaps somewhat counterintuitively, by state actors.

Much of the communication we talk about today as happening on “the internet” (which technically refers to nothing more than a specific protocol for exchanging data between networked computers) actually occurs via a relatively small number of digital platforms (e.g., Discord, TikTok, WhatsApp, Telegram, Instagram, Twitter, and Facebook).

All of these are governed by algorithms designed to prioritize certain content, shape social interactions, and gather data in ways that maximize their commercial potential—a model sharply at odds with most understandings of “community.” These platforms are also increasingly similar to each other, since in order to compete, each must adopt the most effective features of the others, as demonstrated by YouTube’s “Shorts” feature, a close reproduction of TikTok.

Meanwhile, these platforms have grown more pervasive and woven into the rhythms of everyday life, leading to a progressive collapse in the distinctions people draw between different sources of media and information—for many, the lines between “online” and “offline” are getting blurrier.

For example, to many younger people, there is little difference between receiving news in person, through a Discord message, or via a meme on Instagram. The emotional, symbolic, and psychological weight can be equivalent regardless of the medium. For those working in the peacebuilding field, this insight carries enormous significance for how we think about and classify modalities and causes of conflict stemming from digital mediums.

More specifically, a compelling and up-to-date understanding of violence organized on digital mediums cannot artificially create a divide between “online” spaces and the “real world” or sharply distinguish “real world communities” from “online communities.” For many, they are one and the same. To understand the evolving relationship between digital communication platforms and violence in a smaller, angrier internet, the peacebuilding field must move beyond such binaries with roots in outdated conceptions of the internet.

The limits of misinformation, disinformation in peace, conflict

One particularly well-known example of a clear link between digital platforms and violence is Facebook’s failure to prevent its platform from being used to “foment division and incite offline violence” in Myanmar. Military officials in the Southeast Asian nation were behind a systematic campaign to target the Rohingya Muslim minority that resulted in murder, rape, and large-scale forced migration.

One solution that has been widely adopted—the use of digital warnings attached to posts flagging them as misinformation or state-sponsored media—can actually serve to deepen suspicion among people already predisposed toward such content. Simply by virtue of being flagged “dangerous or untrue” on platforms assumed to be hostile to any number of groups, such content paradoxically comes to be perceived as “truer” than unflagged content. The flag itself functions to draw attention and heighten excitement over people saying “dangerous things” rather than to neutralize falsehoods or slow the spread of misinformation.

Studies of misinformation and disinformation tend to treat fact-checking and journalism as natural and obvious solutions: tools for assessing claims and, where appropriate, producing rational and cogent challenges to articulations of political and extremist violence.

The fact-checking approach to misinformation and disinformation comes with stark limitations. Two symptoms of its failure are the popularization of “post-truth” as a pithy summary of our declining capacity to agree on basic facts, and the spread of articles—such as this one—that shift the burden of countering misinformation and disinformation to individuals. The paradigm fails because it cannot account for malignant state, non-state, and corporate actors who collaborate to create rigid, small digital landscapes highly dependent on advertising revenue, which financially incentivizes the rapid spread of all information, regardless of its status as misinformation or disinformation.

Misinformation and disinformation are not flaws in the system; they are part and parcel of the fundamental structure of digital mediums. By design, algorithms cannot and do not differentiate information quality. Powerful actors in digital mediums have no incentive to police or remove misinformation or disinformation either, as this would fundamentally undermine the reach and spread of their platforms.

Furthermore, what the framework for misinformation and disinformation (and its prescription of fact-checking as a remedy) fails to appreciate is that violence organized on digital mediums is as much about group self-expression and identity affirmation as it is about people behaving violently due to incorrect or deliberately false information they find online. People commit acts of violence not simply because they are ill-informed but because they want to hurt people they dislike and find a convenient pretext for doing so.

For example, across the Middle East and North Africa, gender and sexual minorities are targeted by state authorities for social media posts that simply express who they are without any explicit political content or advocacy. Misinformation and disinformation are not behind this kind of state violence. Even if government authorities hold misconceptions about gender and sexual minorities (which are, in theory, “correctable” through exposure to better information), the violence would likely continue because this population is seen as a threat simply by virtue of their identity and is so weak they can be targeted without consequence.

The same holds true for peace activists in many countries around the world: state and non-state actors often commit acts of violence against peaceful protestors based on a wholly accurate understanding of viewpoints they perceive as wrong or dangerous, not in response to rumors or propaganda. This is the point at which the misinformation and disinformation approach, at least in studies of peace and conflict, fails to capture the ways digital media can generate violence.

Expanding our imagination

As we have been arguing throughout, our current peacebuilding language in many respects falls short of capturing the contemporary digital experience, and this is one possible reason our policy prescriptions suffer the same fate. The terminology we use to discuss digital media remains optimistic and often speaks of the consequences of using technology—when in fact it would be more accurate to say that we live technology in nearly every domain of life, including war and peace.

Some options for improving the peacebuilding field’s approach to digital mediums, including the field’s response to misinformation and disinformation, among other malignant digital phenomena, include:

Update our understanding of the internet and rapid technological change as a form of “global shock”

The utopian idea of the internet is a long-gone fantasy. The internet is a rigid, tightly controlled, monitored, and tracked space. State and corporate actors are powerful and active in intervening across digital communities, for good and ill. The unchecked optimism and artificial barriers we often still assume to exist between the digital and physical worlds are both gone. We can no longer speak of “online communities” but must rather think in terms of communities with both digital and physical components. Analysis and practice in peacebuilding that fail to appreciate this shift will be painfully limited in their capacity to have enduring relevance and offer insight.

Furthermore, many digital spaces that encompass a malignant dimension (such as spreading misinformation and disinformation) often serve more benign and, sometimes, highly valuable functions within their communities. Social clubs and gaming or entertainment channels can become sites of recruitment or indoctrination for specific political and ideological agendas and function as platforms for extremist groups to generate financial and material support. The distinction between entertainment and terrorism is far less clear-cut than we might think.

Generate better understanding of national, transnational variations in internet cultures and their implications for conflict, peacebuilding

Across different countries, regions, and language groups, we see huge diversity in internet landscapes and cultures of information consumption. Too often, expertise on a country, region, or thematic issue, such as gender or religion, underappreciates these variations in digital landscapes. Understandings of such contexts are also often generated from specific user experiences rather than from comprehensive studies of distinctive and often idiosyncratic practices, injecting a degree of bias into research and writing.

In addition, approaches specifically to misinformation and disinformation vary considerably between non-state and state actors, and there is only limited research exploring the various strategies adopted by different types of organizations—and even less on effective peacebuilding strategies to counter them. Radical groups may use disinformation to alienate people from society as part of their recruitment efforts, while state actors may use disinformation to harm morale in targeted societies or misdirect enemy resources. These are different tactics, with different strategies, and require different solutions, all of which must move beyond “adding truth and stirring” to explore new forms of policy, programming, and regulation.

Create digital media programming specific to peacebuilding

Investing in programs and research specifically focused on the role of digital media in peace and conflict can generate the field-specific knowledge and insight necessary to build out new, technology-sensitive approaches to peacebuilding. Ensuring that these programs and tools closely track but remain independent of the key digital platforms will be vital to ensuring that they develop an unbiased capacity to assess how corporate, state, and non-state actors enable and facilitate violence across digital and physical spaces.

Contributed by Peter Mandaville (PhD) & Julia Schiwal

