One of the most active Twitter accounts during the 2012 Quebec election tweeted 11,000 times in support of the nascent Coalition Avenir Québec (CAQ). But the account, named CAQBot, wasn’t human. It was a bot: an automated software program designed to mimic human interactions on social media. The use of CAQBot is a sign that campaigning is changing in Canada, whether parties are ready or not. The CAQ, for its part, didn’t even program CAQBot; a follower did. With bots on the scene and the growing artificial intelligence capabilities of computers, political parties need to establish a code of conduct for their digital campaigning.

Digital campaigning and democracy

Email blasts and social media posts from political parties have been part of politics for a while, but digital campaigning is evolving alongside computing and the Internet. The latest developments offer new possibilities for promoting parties and for engaging and targeting voters. Proponents believe such targeting engages more Canadians in democracy, but the benefits may soon be outweighed by the risks.

The evolution of artificial intelligence in particular is changing digital campaigns, and we aren’t prepared for it. Bots and the other tools of computational propaganda automatically send politically motivated messages to citizens, contributing to and influencing public discourse and the formation of political opinions. In addition, parties are using a set of statistical techniques called predictive analytics to decide who is worth talking to and who isn’t.

Unfortunately, political parties have been prone to “go negative” in past digital campaigning. The Conservative Party has long relied on attack websites, one of which showed a puffin pooping on Liberal leader Stéphane Dion. The strategist Warren Kinsella ran a seemingly “user-generated” YouTube channel on behalf of the Liberal Party in the 2007 Ontario election; it featured videos of Progressive Conservative leader John Tory making a number of gaffes. Someone even went beyond negative messaging and attacked the NDP’s voting infrastructure during its 2012 leadership race. The list goes on.

Bots and predictive analytics don’t have to deepen negativity. Used responsibly and with the public interest in mind, they can be beneficial. What we need is a code of conduct for digital campaigning that requires political parties to identify their use of tools like bots and predictive analytics, so that observers can check whether those tools are being used appropriately.

Manipulating public discourse: Political bots

Bots are already a part of Canadian politics. Our research found bots amplifying alternative news sites during the 2015 federal election. We suspect (on the basis of research by Mentionmapp) that someone hired bots in the 2017 BC election to try, unsuccessfully, to increase their own visibility on #BCPoli, the public hashtag for British Columbia politics. This is one example of what we call amplifier bots. They can inflate social media rankings, duping reporters, the public and even parties and politicians into thinking a post has more support than it actually does. For the most part, amplifier bots are just a nuisance, but they can be troubling if media coverage continues to rely uncritically on social media statistics.
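
To make the mechanics concrete, here is a minimal sketch of the kind of heuristic an observer might use to flag possible amplifier accounts on a hashtag. The thresholds and sample accounts are invented for illustration; real detection work, including Mentionmapp’s, draws on many more signals.

```python
# Illustrative heuristic for flagging possible amplifier accounts.
# Thresholds and sample data are invented for demonstration; real bot
# detection combines many more signals than posting rate alone.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # average posting rate
    retweet_ratio: float    # share of posts that are retweets (0 to 1)
    account_age_days: int

def looks_like_amplifier(a: Account) -> bool:
    """Flag accounts that post at inhuman rates, mostly retweet, and
    are relatively new -- a common (but fallible) bot signature."""
    return (a.tweets_per_day > 100
            and a.retweet_ratio > 0.9
            and a.account_age_days < 90)

accounts = [
    Account("hashtag_flooder", tweets_per_day=400, retweet_ratio=0.97, account_age_days=20),
    Account("ordinary_voter", tweets_per_day=3, retweet_ratio=0.4, account_age_days=2100),
]

for a in accounts:
    verdict = "possible amplifier" if looks_like_amplifier(a) else "likely human"
    print(f"{a.handle}: {verdict}")
```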

Bots can also dampen voices online. Benjamin Perrin, a University of British Columbia law professor, found this out the hard way after tweeting critically about former PM Stephen Harper: bots quickly flooded him with automated attacks. Dampener bots work to intimidate and bully particular people, voices and websites in order to force them offline.

That said, bots can be useful. A bot that broadcasts legitimate party announcements to social media accounts and is explicitly tied to the party is not likely to cause harm. Voters can talk to some bots, asking questions to learn about a party’s platform. Through a code of conduct, all parties should agree to make public when they use political bots so that observers can understand and assess their role.
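
A benign, disclosed bot of this kind can be quite simple. The sketch below is a hypothetical platform-FAQ bot: the party name, keywords and answers are placeholders, and a real bot would connect to a messaging platform’s API rather than the console. The point is that disclosure can be built in by design.

```python
# Hypothetical sketch of a disclosed platform-FAQ bot. All party names,
# keywords and answers are placeholders, not any real party's content.

DISCLOSURE = "[Automated account run by Example Party -- not a human]"

FAQ = {
    "childcare": "Our platform pledges expanded childcare funding (placeholder).",
    "climate": "Our platform sets an emissions-reduction target (placeholder).",
    "taxes": "Our platform proposes changes to tax credits (placeholder).",
}

def reply(question: str) -> str:
    """Match a voter's question to a platform answer by keyword,
    always appending the disclosure that this account is automated."""
    q = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in q:
            return f"{answer} {DISCLOSURE}"
    return f"Sorry, I don't have an answer for that yet. {DISCLOSURE}"

print(reply("What is your plan on climate?"))
```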

Some bots are unavoidable. Bots follow just about everyone on Twitter, and politicians often garner more attention from bots because of their popularity. Political parties may have no control over this type of amplifier bot. Implementing a code of conduct would at the very least force political parties to consider and discuss the role of bots in social media. It would create space for an important conversation and reflection that would increase digital media literacy among political actors and help parties avoid being gamed by bots.

Like many activities in cyberspace, the use of bots can be difficult to trace. Bots might allow third parties (people or groups other than candidates or parties that are advertising during an election) to amplify certain political messages or dampen others without the consent of their authors. Parties could find themselves the beneficiaries or targets of bots without knowing who programmed them. With a code of conduct that requires political parties to be transparent about using bots, they at least can be removed from the list of suspects when unidentified bots turn up.

The potential bias of predictive analytics

Most parties use some form of predictive analytics to examine the political data they have collected — profiles of voters and their past interactions with the party — and make predictions about voters’ behaviour. Either the party or, more often, a consultant analyzes the data to calculate the probability that each voter will support the party and the probability that the voter could be persuaded to vote for the party. The parties use these results to make important decisions like who to target and who to encourage to vote. At worst, predictive analytics exacerbates low voter turnout in Canada, allowing parties to continue to distance many voters from the electoral process.

Because political data are largely unregulated in Canada (control is left to the parties and politicians), predictive analytics has been introduced without much oversight. A code of conduct for parties should ensure that predictive analytics and future implementations of artificial intelligence comply with emerging best practices for fairness, accuracy and accountability in machine learning and algorithmic decision-making.
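
In practice, voter scoring of this kind typically comes down to a classification model. The sketch below uses logistic regression — a common choice for such tasks, though no party’s actual model is public — on invented data to show the basic shape; real party models and voter files are proprietary and far richer.

```python
# Minimal sketch of voter scoring with logistic regression. Features,
# data and threshold are invented for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per voter: [age, past_donations, canvass_responses]
X = np.array([
    [34, 0, 1],
    [58, 2, 3],
    [23, 0, 0],
    [71, 5, 4],
    [45, 1, 2],
    [29, 0, 1],
])
# 1 = voter told a canvasser they support the party, 0 = they did not
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new voter: estimated probability of supporting the party
new_voter = np.array([[40, 1, 2]])
support_prob = model.predict_proba(new_voter)[0, 1]
print(f"Predicted support probability: {support_prob:.2f}")

# A campaign might contact only voters above some threshold -- exactly
# the targeting decision (who to talk to, who to ignore) described above.
if support_prob > 0.5:
    print("Add to contact list")
```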

Parties should agree to audit their scoring of voters and their other analytics for race or gender bias as well as to ensure that their decisions about which voters to contact and which to ignore are auditable by Elections Canada or Measurement Canada. With transparency, analytics algorithms could be trusted advisers to politicians, helping them use data to encourage a more active democracy without concern that their digital gaze might be skewing their view of the Canadian public.
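
Such an audit need not be complicated. The sketch below compares contact rates across hypothetical demographic groups, borrowing the “four-fifths” disparity threshold from employment law as an illustrative benchmark; the data and the threshold are assumptions, not a standard Elections Canada currently applies.

```python
# Illustrative audit: compare contact rates across demographic groups.
# The log data and the 80% ("four-fifths") threshold are assumptions
# chosen for demonstration, not a mandated Canadian standard.

from collections import defaultdict

# Hypothetical audit log of targeting decisions: (voter_group, was_contacted)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [contacted, total]
for group, contacted in decisions:
    counts[group][0] += int(contacted)
    counts[group][1] += 1

rates = {g: c / t for g, (c, t) in counts.items()}
print("Contact rates by group:", rates)

# Flag the model if any group's contact rate falls below 80% of the highest
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparity: {group} contacted at {rate:.0%} vs best {best:.0%}")
```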

A code of conduct could go a long way toward bringing to light the use of digital tools such as bots and predictive analytics and ensuring that the public interest is kept front and centre. Elections Canada should convene political parties and their data brokers, as well as public interest groups including academics and media literacy organizations, for a consultation process that will lead to a set of guidelines. Elections Canada has considered a code in the past. We still have time before the 2019 federal election to establish a code of conduct.

Photo: Shutterstock, by Zapp2Photo.



Fenwick McKelvey
Fenwick McKelvey is an associate professor in the Department of Communication Studies at Concordia University. He is co-director of the Applied AI Institute and the Machine Agencies Working Group at the Milieux Institute. Twitter @mckelveyf
Elizabeth Dubois
Elizabeth Dubois is an assistant professor at the University of Ottawa and a fellow at the Public Policy Forum.

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.
