Right from the start of the 2021 election campaign, the political parties’ social media strategies have been in full swing. Justin Trudeau was masked and bumping elbows with children on Instagram (where he has four million followers); Annamie Paul retweeted Margaret Atwood’s tweet demanding a leaders’ debate on climate change; Erin O’Toole was on Facebook posting video rebuttals to perceived gaffes by Trudeau; and Jagmeet Singh has continued to build his popularity with younger demographic groups as the dominant Canadian politician on TikTok.

As candidates across the country push their carefully orchestrated videos and chatty posts, they often receive hostile responses. To better understand this problem, we researched online abuse in the 2019 federal election campaign. We interviewed more than 30 candidates and staff; we used a novel machine-learning model to analyze more than a million tweets directed at candidates; and we made recommendations on how to mitigate the problem. We found that 40 per cent of the tweets were negative; only 7 per cent were positive (figure 1). The more prominent the politician, the greater the volume of negative messages they received (figure 2). (For more details, including reflections on online harassment by five women in politics, see our report Trolled on the Campaign Trail: Online Incivility and Abuse in Canadian Politics, published by the Centre for the Study of Democratic Institutions at the University of British Columbia).

Online attacks can threaten candidates’ security and undermine their psychological health. The communications officer for a very prominent candidate recounted that the candidate “received not terribly infrequent death threats.” Some candidates told us they were anxious about what online trolls might do next, or felt demoralized by insults from apparently random members of the public. (Others, it should be acknowledged, said they do not find online abuse to be a major problem.) The psychological toll extends beyond the candidate. Much of the labour of evaluating, hiding or reporting online abuse falls upon political staff, some of whom described this work as an occupational health and safety hazard.

Online abuse can also undermine the quality of candidates’ engagement with the public. “Social media abuse is designed to take energy and time away from a campaign and to demoralize,” a former cabinet minister told us. Many candidates respond by limiting their personal engagement on social media, sometimes at the behest of protective staff. They use social media as a way to broadcast messages rather than to interact with users. Furthermore, research suggests that members of the public are less likely to engage in productive discussion or seek credible information when they encounter uncivil messages on social media platforms.

These issues exacerbate the obstacles faced by women, LGBTQ folks and racialized individuals who seek office. Almost every racialized or female candidate we interviewed told us that they faced online abuse that attacked their identity. Many saw misogynist or racist attacks on prominent candidates – such as Liberal cabinet minister Catherine McKenna – as sending a broader message about who belongs in politics and who doesn’t. Such hostility has pushed some women politicians to leave public life; this has happened in Canada and the United Kingdom.

There are many causes of online toxicity. Elections rouse the animal spirits among partisans, particularly when the parties themselves stoke fervor and hostility through attack ads and displays of contempt. Social media platforms make it easy for users to instantaneously lob insults. It is difficult to hold the authors to account, even for rarer but more dangerous communication like hate speech and threats.

So what can be done?

To mitigate the problem, we need multiple approaches, which could be pursued by candidates, political parties, social media platforms and their users, and legislators.

Candidates could post their own social media policies, so that followers know what might get them blocked or reported. They could also develop plans to manage abuse. For example, they could designate and train more than one staff member to deal with abusive posts, which would spread out the burden. Civil society groups like ParityYEG, Operation Black Vote Canada and the Samara Centre for Democracy have been training prospective candidates to address abuse, and raising awareness about the issue.

Parties put considerable resources into developing social media campaigns, but candidates in the last election told us that they received little training or assistance to address online abuse. Political parties could ensure that they have staff on hand to help any candidate who is the target of online abuse. Otherwise, staff will spend more time on social media than on managing the actual campaign. Parties could develop best practices and explore new technologies (such as Block Party) to help candidates protect themselves. They could also set clear expectations for how their candidates and staff behave online by creating guidelines and enforceable codes of conduct. This might help to stop partisan supporters from piling on opponents, something that several candidates in the 2019 election suspected had played a major role in provoking online abuse.

In the short term, social media platforms should make it easier to report multiple abusive posts simultaneously. They could also ensure that they respond swiftly to candidates’ notifications of problematic posts. Enforcement of platform policies often seems arbitrary or uneven. Content moderators may not all understand the Canadian context, which might prevent them from detecting country-specific terms of abuse. While companies like Facebook have created more resources for Canadian candidates since 2019 (including a safety guide for women designed in partnership with the civil society organization Equal Voice), it will be important to hear from candidates whether these resources improved their online experience in 2021, and what more they would like to see done.

In the long term, platforms could improve transparency around patterns of abuse and how they are addressing online hate. In the European Union, legislators have already taken steps to mandate this transparency through the draft Digital Services Act. After the election, Canadian legislators could consider similar steps. This will ensure that any further policies to deal with online abuse are evidence-based. (As we have argued elsewhere, greater transparency could also help us evaluate the impact of proposed legislation on platforms, such as recent proposals put forward by the Liberal government).

Finally, individual Canadians can play a role. When engaging on social media, they can take a deep breath before posting and ask themselves what the purpose of their post is. Robust debate around policy issues is a core tenet of democracy, but we can all consider whether our posts, perhaps unintentionally, slip into personal insults and attacks. We can think about how to create constructive conversations online, and what our own posts should look like to help generate that environment.

Elections are never free of spin or personal attacks. But the current level of online abuse is deeply concerning, particularly when it detracts from candidates’ ability to campaign, and when it deters Canadians from seeking office. The good news is that there are plenty of ways to alleviate this problem. It’s not too late to prevent some trolling on the campaign trail.

This article is part of the How can we improve the elections process special feature. 

Chris Tenove
Chris Tenove is a postdoctoral researcher at the University of British Columbia. He has published peer-reviewed articles, book chapters and policy reports on topics including disinformation, online harms and social media regulation.
Heidi Tworek
Heidi Tworek is associate professor of international history and public policy at the University of British Columbia. She is a non-resident fellow at the German Marshall Fund of the United States and the Canadian Global Affairs Institute, and is also a senior fellow at the Centre for International Governance Innovation.

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.