Online discourse can be both shockingly and trivially toxic. Abusive and hateful messages, misinformation and trolling are all appalling, hurtful and poisonous in online conversations. At the same time, a large amount of toxicity is trivial, carried out by people like you and me. The German-American political philosopher Hannah Arendt warned of the banality of evil, enacted through borrowed expressions and clichés by average people in Nazi Germany. Arendt proposed that extreme ideas had been normalized thanks in part to Nazi slogans and commonplace expressions assumed and repeated without much thought by everyday people. The pervasive online toxicity of today has many ordinary perpetrators as well — ordinary people being ordinarily toxic.

Undoubtedly, plenty of bad actors exist. Alt-right, extreme-right and extreme-left posters are distributors of toxic messages. Angry misogynists took part in “Gamergate,” a harassment campaign against women in the video game industry. Political actors, home-grown and foreign-based, with both obscure and clearly defined goals, subtly undermine and openly disrupt. Garden-variety trolls delight in trolling. It is important, though, to remember that toxicity is probably not as pervasive as you think it is. Thanks to the human cognitive failing known as negativity bias, we tend to perceive negative events as more strongly and widely negative than they really are. Negativity bias makes that one negative comment on your Facebook, Twitter or Instagram post drown out all the other positive feedback.

The phenomenon I am concerned with here, however, is the equal opportunity nature of toxic behaviour. Given the right circumstances, anyone can become a troll. We have, it seems, all found ourselves in the right circumstances all too often.

The antagonistic side of the internet has been known since its early days. As far back as 1996, Ellen Spertus developed a system to automatically detect email messages likely to start “flame wars.” We have been warned many times by Spertus, activists, women, Black feminists and many scholars. It seems, perhaps, like a recent phenomenon because it has grown to volumes not seen before, and because we are finally paying attention. We are paying attention because we see our most fundamental institutions being threatened.

Straightforward explanations for online toxicity abound: anonymity, lack of face-to-face interaction, or context collapse. Anonymity is a much-discussed factor, involving fake accounts, bots and garden-variety trolls. While it’s comforting to think that some unknown enemy is responsible, a 2017 study of Wikipedia comments found that more than half of the personal attacks against Wikipedia editors were perpetrated by registered contributors, who are more easily identifiable.

Anyone who has witnessed abusive discussions with friends and relatives on Facebook knows that many people do not shy away from having their very public identity associated with toxicity and harassment. Rather than anonymity alone, the broader online disinhibition effect is more likely responsible: the lack of visibility, face-to-face interaction and visual cues places us one step removed from one another.

Another reason for the banal nature of online commenting is the business model on which the internet was built, especially the turbo-charged targeted advertising structure that has dominated since the beginning of this century. Advertising requires attention, and attention and engagement are higher when they are the result of negative emotions such as anger or fear.

We should not neglect, of course, the probable causes of anger and toxicity online that have their roots in the offline world, including mental health issues, the steady decline in civic participation, the rise in inequality and the inevitable turmoil that this inequality brings about.

Malicious actors, anonymity, online disinhibition, the business model of the internet and inequality play an unquestionable role in the spread of online toxicity. I would argue, however, that it is our indifference and our tolerance that most urgently need addressing.

Arendt’s depiction of the banality of evil is so powerful that it has been useful in describing the banality of misogyny, the banality of sexual violence, or the banality of bureaucracy. Sarah Hawkes, in her review of Kate Manne’s Entitled: How Male Privilege Hurts Women, points out that “the notion that evil flourishes when we are thoughtless, when we don’t question, or when we lose our empathy is as relevant today as when Arendt wrote from a Jerusalem courtroom.”

The banalization playbook has been recycled many times since the Nazis. In Rwanda, the medium was the radio and the toxicity involved dehumanization of an ethnic group. In Myanmar, the medium was Facebook and the toxicity involved comparing Muslims to “mad dogs.” The “Dangerous Speech” project has widely documented how casual abusive language, left unchecked, becomes dangerous speech.

We command whatever medium is available at the time. It’s not the internet; it’s us.

The COVID-19 pandemic seems to have exacerbated trends toward individualism and a lack of connection and engagement with the spaces and the people around us in the physical world. We are seeing the contagion spread from online harms to real-world events. The Samara Centre for Democracy tracked toxicity online during the 2021 Canadian federal election. One of its reports found that Prime Minister Justin Trudeau received the highest proportion of toxic tweets of all candidates. That finding came the week before protesters disrupted one of Trudeau’s campaign rallies. While protest is a natural part of democracy, the anger in the protests prompted the cancellation of the rally over security concerns. The signs and the messages in the protests appear to have originated in online forums.

It is also worth mentioning that online abuse is not evenly distributed. Studies of online toxicity consistently show that abusers disproportionately target specific groups, including women, racialized people and other marginalized communities.

Joseph Reagle’s study of online comments classifies harmful behaviour online into two categories: “bad people acting up” and “good people acting badly.” The former are a loud minority with a disproportionate impact thanks to cognitive shortcomings such as the negativity bias and platform failings such as the algorithmic amplification of anger, because anger generates more engagement.

We need to deal with bad people acting up, because they spread misinformation, create toxic content and pollute our common discourse. Fortunately, there are smart people thinking hard about this. The recent proliferation of research institutes has resulted in insightful reports and policy proposals about content moderation and platform governance. Entire books are being written about how to minimize the nefarious effects of the internet’s business model and how to at least understand, if not address, the inherently racist, sexist and discriminatory architecture of many modern algorithms.

Yet we are not talking enough about good people acting badly. We need to work on developing cultures conducive to minimizing online harm of the more everyday variety, because the true power of abuse lies in the fact that even small-scale toxicity, simply by being present and tolerated, can become banal.

This can be done top-down, with platforms designing public spaces that encourage pro-social and participatory behaviour and institutions creating and enforcing frameworks and rules for democratic participation. But it fundamentally needs to be done bottom-up, with citizens working together, as part of communities large and small, to acknowledge that online toxicity has become banal and to understand the real-life consequences of such banality.

To be clear, this is not a call for a false sense of civility, one where important discussions do not take place because we are afraid to hurt each other’s feelings. Nor is it about tone policing, which is harmful when used as an instrument of power.

Many of the hate activities organized online are promoted by bad people acting up. They draw their power, however, from the good people who are angry and frustrated by the many complexities of modern life. It is those good people, sometimes you and me, who have allowed online toxicity to become banal.

Toni Morrison said language is agency, an act with consequences, and that “doing language” may be the measure of our lives. What we do with language and how we weaponize it may be the measure of our society.

We need to talk about how we talk to each other.
