What causes such toxicity? What can we do about it? Both questions became the cornerstone of my dissertation project as a Master’s student studying behavioural science. This article outlines the answers, insights, and reflections that I found through my research—an honest attempt to reduce online toxicity in the digital space, writes Nurfitriyana Riyadi
_______________________________________________
I have been a loyal, active user of X (formerly Twitter) for more than a decade. I open the platform at least twice a day: in the morning when I am barely awake and at night before I go to sleep. For years, it has been my go-to platform for news, social updates, and text-based entertainment shaped by innocent public banter.
However, everything changed during the 2024 Indonesian presidential election. Swarms of hostile, disrespectful, and unreasonable tweets from polarized voters exhausted me. I felt torn between the need to keep observing public discourse and the need to distance myself from its toxicity. In the end, heated political debates, filled with fiery, snarky, and hateful remarks, drove me away from the platform.
Such a personal experience illustrates how social media, our vast digital public sphere, is plagued by online toxicity. Online toxicity is defined as “a rude, disrespectful, or unreasonable comment” that may drive people away from an online discussion. It can be seen as a form of verbal aggression and incivility in communication. The phenomenon is notably evident in interactions between Indonesian social media users, who are notorious for being among the most digitally uncivil in the Asia-Pacific region. According to past research, online toxicity is also commonly found in controversial and divisive political discussions, such as the ones I encountered on X during the last election period.
In an era in which people rely on social media to seek information, online toxicity poses a threat to people’s deliberation and decision-making processes. By creating an antagonistic and dismissive environment, online toxicity can eventually lower both the quality and quantity of the material that people access online. This highlights a pressing need to combat toxicity in cyberspace.
What causes such toxicity? What can we do about it? Both questions became the cornerstone of my dissertation project as a Master’s student studying behavioural science. This article outlines the answers, insights, and reflections that I found through my research—an honest attempt to reduce online toxicity in the digital space.
Past studies indicate that the factors influencing online toxicity range from personal characteristics to the social aspects of computer-mediated communication. For instance, people from younger age groups, with lower education levels and lower openness to experience, are more inclined to be toxic online. The more frequently individuals participate in online political discussions, and the more acceptable they find it to retaliate against a politically aggressive message on the internet, the more likely they are to express toxicity. Other factors include echo chambers (i.e., social media algorithms separating people with opposing views), in-group and out-group polarization, anonymity, and depersonalization or deindividuation.
Depersonalization (or deindividuation—as scholars have used the two terms synonymously) is “a redirection away from one’s audience”. It is a condition in which individuals on the internet lack the interpersonal information needed to distinguish one another. As a result, it becomes harder to see that everyone online, including oneself, is a unique individual with their own identity.
The concept of depersonalization has invited many scholars to theorize about how toxicity develops in computer-mediated communication. Some propose that depersonalization makes people less inhibited, leading them to display toxicity because the normative feedback that regulates civil behaviour is less apparent online. Conversely, when individuating information is inadequate, others suggest that depersonalization drives internet users to exercise their social identity instead—they identify with a social group in the online community, and toxicity arises when the group’s norms support such behaviour.
With depersonalization, individuals in online settings seem to forget that the fellow internet users they interact with are also human beings with distinguishing characteristics in the physical world. What if we could make this audience factor more prominent in people’s minds, in a way that redirects their attention back to the humans on the other side of the conversation? This is where I experimented with behavioural science—to see whether such a shift would affect online toxicity.
Behavioural science sits at the intersection of economics and psychology. Its core notion is that human behaviour is not always rational, as it is influenced by biases, emotions, and social norms. From this perspective, behavioural science can address gaps in economics concerning how people make decisions. It also offers applied knowledge and methodology to drive behaviour change.
Intervention design and testing are central to the behavioural science approach. A behavioural science intervention typically works by altering the conditions under which decisions are made so as to trigger individuals’ mental shortcuts. Widely known as a “nudge”, this kind of intervention operates under the principle of libertarian paternalism: it does not restrict individual choices, yet it steers people towards behaving in a certain direction. The impact of a nudge on behaviour is ideally evaluated through a rigorous experiment or randomized controlled trial (RCT).
What I did for my dissertation was essentially behavioural-science-coded. I ran a between-subjects online experiment in June 2024, involving 192 Indonesian social media users, to test a nudge created to influence online toxicity. In a hypothetical, unnamed social media setting that resembled X, participants saw a toxicity-triggering political post along with replies posted by anonymous users. They were asked to choose how to respond—in a toxic or non-toxic way—as a measurement of online toxicity. The nudge was randomly shown to some participants as an extra disclaimer in the design, reminding them that other social media users are fellow human beings in the real world.
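For readers curious about the mechanics, below is a minimal sketch of how random assignment and the primary analysis of such a two-arm experiment might look. Everything here is illustrative: the response counts are made-up placeholders rather than the study’s data, the chi-square test is just one reasonable choice of test, and none of this is the dissertation’s actual code.

```python
# A minimal illustrative sketch of a two-arm nudge experiment.
# All participant counts and response rates are hypothetical placeholders.
import random
from scipy.stats import chi2_contingency

random.seed(42)  # reproducible illustration

# Randomly assign 192 hypothetical participants to control vs. nudge,
# mirroring a between-subjects design.
participants = list(range(192))
random.shuffle(participants)
control, nudge = participants[:96], participants[96:]

# Placeholder outcomes: how many in each arm chose the toxic reply.
# In the real study these would come from participants' actual choices.
control_toxic = 40  # hypothetical count of toxic responses in control
nudge_toxic = 34    # hypothetical count of toxic responses under the nudge

# 2x2 contingency table: rows = condition, columns = (toxic, non-toxic).
table = [
    [control_toxic, len(control) - control_toxic],
    [nudge_toxic, len(nudge) - nudge_toxic],
]

# Chi-square test of independence: is the toxicity rate associated
# with the experimental condition?
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"Control toxic rate: {control_toxic / len(control):.1%}")
print(f"Nudge toxic rate:   {nudge_toxic / len(nudge):.1%}")
print(f"chi2({dof}) = {chi2:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 here would parallel the study's null result:
# no statistically significant effect of the nudge on toxic responding.
```

A two-proportion z-test, or a logistic regression that adjusts for participant characteristics, would be equally standard analytical choices for a binary outcome like this.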

The proposed nudge demonstrates how the salience principle can be used to influence behaviour. In the behavioural science field, it is postulated that human rationality is bounded; people’s memory, attention, and cognitive processes are limited. Individuals are unable to register every single environmental stimulus, or to retrieve every piece of information from memory, when forming a judgement. As such, a particular stimulus in their surroundings can be made salient to capture their selective attention, giving it a stronger influence on judgement and decision-making.
By adding an extra disclaimer to the mock social media design, I intended to increase the salience of one’s online audience. I hypothesized that this would help people better imagine (and better empathize with) other social media users, and consequently lower depersonalization and, ultimately, online toxicity.
Results from my experiment suggest that the proposed nudge did not significantly impact online toxicity among Indonesian social media users. This leads us to several practical insights and reflections, especially about the importance of evaluating behavioural interventions—something strongly advocated by behavioural science.
First, there is genuine value in experimentation. A behavioural intervention to combat online toxicity might take various forms in various settings: a small feature released on a social media platform, or a social development programme administered in a community. Through RCTs, we can assess an intervention’s impact and effectiveness before scaling it up to a wider population, saving resources and mitigating unintended consequences (if any) in the process.
Second, it is crucial that we continuously learn from each implementation and keep iterating our approach to influencing behaviour. From my experiment alone, for instance, the nudge’s insignificant impact might be because the disclaimer was not visually salient enough to capture individual attention, and/or because its key message was not simple enough for participants to digest. Future studies could therefore make the nudge more visually striking and simpler to grasp. Other types of nudges, such as invoking social norms or reminding people of the consequences of their past behaviour, could also be tested against the same toxicity issue.
This research also shows that lowering depersonalization (and consequently reducing online toxicity) is not an easy task. Even assuming that the disclaimer succeeded in reminding participants that their conversation partners are human, there was still a lack of identifying information about each person, so recognizing the individuality of oneself and others in the online space might remain difficult. This challenge has probably intensified with the growing presence of social bots (i.e., computer algorithms that automatically generate social media posts) and buzzers (i.e., groups of social media users hired en masse by political actors and governments to control narratives online), which adds a sense of inauthenticity to computer-mediated communication.
Online toxicity is a challenging problem to address. Eliminating it entirely feels rather unrealistic since, from a different point of view, online toxicity can be seen as a natural form of self-expression. It is also arguably needed, to a certain extent, to spur conversations and democratic processes. Nevertheless, any attempt to minimize online toxicity is worth a shot in fostering a more constructive online environment, and behavioural science could be among the possible approaches guiding such efforts.
______________________________________________
*About the research: This blog is based on the Author’s dissertation for her LSE MSc Behavioural Science, for which the Author was awarded SEAC’s Dissertation Fieldwork Grant.
*The views expressed in the blog are those of the author alone. They do not reflect the position of the Saw Swee Hock Southeast Asia Centre, nor that of the London School of Economics and Political Science.
Photo by Gilles Lambert on Unsplash