Edited Volume Read Along: How the Digital World Drives Real World Conflict

The impact of developing technology on human life and society is so massive that it cannot be overstated. It influences all areas of life for communities around the world. Despite this ubiquity, peacebuilders often think of technology as separate from the forces that generate conflict in the physical world. One common way to conceptualize technology is as a tool: implicitly neutral, but capable of being used for great good or great evil depending on how it is wielded. In their chapter “Understanding Digital Conflict Drivers,” authors Helena Puig Larrauri and Maude Morrison present a different framework through which to view technology. In their view, technology is not separate from conflict but operates as an integral part of the forces that drive conflict in the modern era. Technology is not inherently bad, but as a significant part of life, it is also a significant part of conflict.

According to the framework laid out in this chapter, technology can drive conflict in three major ways. The first is strategic communication: the use of digital technologies to create and spread content such as hate speech, misinformation, and disinformation. Hate speech lacks a solid international definition, but the UN defines it as “any kind of communication in speech, writing or behavior, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factors” (Larrauri and Morrison, 2022, p. 175). The chapter also differentiates between misinformation and disinformation by intent: misinformation is untrue or incorrect information spread without intent to deceive, while disinformation is false information spread deliberately to mislead.

The second way that technology can drive conflict is through data management, which can be used to target audiences and accelerate the spread of content. Algorithmic profiling, targeting, and surveillance are all tools that can funnel divisive content and ideas to those who will be most receptive and most easily polarized.

The third driver is labeled networking. This describes the ways that technological networks help form groups and push those groups further apart. It can include affective polarization, which plays on emotion; divisive identity formation, in which people define themselves in opposition to another group; and active recruitment into violence.

Fig. 9.2 Pyramid of digital conflict drivers and respective peacebuilding responses (Larrauri and Morrison, 2022, p. 188)


Within their framework, Helena Puig Larrauri and Maude Morrison describe ways to mitigate these conflict drivers. They structure their responses as a pyramid, with the most surface-level and visible aspects at the top and the deepest, most intrinsic features forming the base. At the top, in level one, they see the main problems as hate speech, overt targeting, and active recruitment. These can be countered with measures stemming from the platforms in use, including terms of service and platform rules, as well as official content moderation. At level two, the issues are defined by things like disinformation, identity polarization, and covert targeting. The phenomenon of manufactured consensus, in which bots are used to make a fringe view seem more widely accepted than it is, is also included at this level. Mitigation measures can include debunking disinformation, social cohesion campaigns, and integrating social media into peace agreements.

At level three, the issues become much more deeply seated in the design of social media and in individuals’ psyches. They include identity construction, affective polarization, algorithmic profiling, and intentional misinformation. These problems can only be combated by overarching strategies such as updating platform design and increasing digital literacy within a population. Policy responses and depolarization efforts can also be employed. It is also at this level that recommendation algorithms, which show users more and more of what they already want to see, do their harm. To combat this echo chamber effect, the algorithms themselves would need to be redesigned. Countering the algorithm can also be useful: AdWords on Google are advertisements that can be targeted at users who search for certain words or phrases, and companies and nonprofits can buy AdWords that show information debunking extremist recruitment directly to people searching for terms related to extremist violence or groups.

Level four involves societal-level transformations such as improving non-violent communication, social cohesion, and mental health. While the pyramid is a good way to show both the potential harms of technology-driven conflict and the responses to them, it also shows just how deeply technology is woven into the lives and minds of people today.

As with anything that has the potential to cause or exacerbate conflict, technology needs to be examined deeply and used carefully. The framework presented here is one way to accomplish that goal. The digital world should not be seen as entirely separate from physical violence and conflict. Each year the digital sphere becomes more entwined with daily life and society in general, and our frameworks for conceptualizing the role of technology must continue to evolve as well.

About the Author:

Stella Hudson is a Graduate Assistant with the Baha’i Chair for World Peace. She graduated from the College of William and Mary in 2021 with a B.A. in English. She is now attending the University of Maryland, where she is pursuing a Master of Library and Information Science.

