Violent extremists and those who subscribe to radical beliefs have left digital footprints online since the inception of the World Wide Web. Notable examples include Anders Breivik, the Norwegian far-right terrorist convicted of killing 77 people in 2011, who was a registered member of a white supremacist web forum and had ties to a far-right social media site; Dylann Roof, the 21-year-old who murdered nine Black parishioners in Charleston, South Carolina, in 2015, and who allegedly posted messages on a white power website; and Aaron Driver, the Canadian suspected of planning a terrorist attack in 2016, who showed outright support for the so-called Islamic State on several social media platforms.

It should come as little surprise that, in an increasingly digital world, identifying signs of extremism online sits at the top of the priority list for counter-extremism agencies. Within this context, researchers have argued that successfully identifying radical content online, on a large scale, is the first step in responding to it. Yet in little more than a decade, the number of individuals with access to the Internet is estimated to have more than tripled, from over 1 billion users in 2005 to more than 3.8 billion as of 2018. With these new users has come an ever-growing volume of user-generated information, leading to a flood of data.

It has become increasingly difficult, if not practically impossible, to manually search for violent extremists, potentially violent extremists or even users who post radical content online, because the Internet contains an overwhelming amount of information. These conditions have necessitated the development of guided data-filtering methods that can replace the laborious manual techniques traditionally used to identify relevant information.

Governments in Canada and around the globe have engaged researchers to develop advanced information technologies, machine-learning algorithms and risk-assessment tools to identify and counter extremism through the collection and analysis of big data available online. Whether this work involves finding radical users of interest, measuring digital pathways of radicalization or detecting online indicators that may help prevent future terrorist attacks, the urgent need to pinpoint radical content online is one of the most significant policy challenges faced by law enforcement agencies and security officials worldwide.

We have been part of this growing field of research at the International CyberCrime Research Centre, hosted at Simon Fraser University’s School of Criminology. Our work has ranged from identifying radical authors in online discussion forums to understanding terrorist organizations’ recruitment efforts across various online platforms. These experiences have given us insights into the policy implications of conducting large-scale data analyses of extremist content online.

First, there is much that practitioners and policy-makers can learn about extremist movements by studying their online activities. Online discussion forums of the radical right or social media accounts of radical Islamists, for example, are rich with information about how members of a particular movement communicate, how they construct their radical identities, and who they are targeting: discussions, behaviours and actions that can spill over into the offline realm. Exploring the dark corners of the Internet can be helpful in understanding or perhaps even predicting trends in activity or behaviour before they happen in the offline world. If, for example, analysts can track an author’s online activity or identify an online trend that is becoming more radical over time, they may be in a better position to assist law enforcement officials and the intelligence community. At the same time, it is important to note that online behaviour often does not translate into offline behaviour; authorities must proceed with caution to ascertain the specific nature of an instance of online activity and the potential threat it poses.
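
To illustrate the kind of trend tracking described above, the sketch below groups a single author’s posts by month and computes how often their vocabulary draws on a watchlist of flagged terms. This is a simplified, hypothetical example only: the posts, the watchlist and the rate metric are invented for illustration and are not drawn from our research data or tools.

```python
from collections import defaultdict
from datetime import date

# Hypothetical watchlist of flagged terms (placeholders, not a real lexicon).
WATCHLIST = {"enemy", "invasion", "traitor"}

# Invented posts by a single author, each with a posting date.
posts = [
    (date(2018, 1, 5), "watched the game with friends last night"),
    (date(2018, 2, 9), "the news keeps talking about the invasion"),
    (date(2018, 3, 3), "the enemy is everywhere and the invasion is real"),
]

# month -> [watchlist hits, total tokens]
monthly = defaultdict(lambda: [0, 0])
for posted_on, text in posts:
    tokens = text.lower().split()
    key = (posted_on.year, posted_on.month)
    monthly[key][0] += sum(t in WATCHLIST for t in tokens)
    monthly[key][1] += len(tokens)

# A rising monthly rate of watchlist terms is, at most, a signal that the
# author warrants closer human review -- it is not evidence of an offline threat.
for year, month in sorted(monthly):
    hits, total = monthly[(year, month)]
    print(f"{year}-{month:02d}: watchlist term rate = {hits / total:.2f}")
```

Even in this toy form, the caveat above applies: a rising score flags material for human analysts; it does not, on its own, establish intent or danger.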

Second, practitioners and policy-makers can gain valuable information about extremist movements by using computational tools to study radical online activities. Our research suggests that it is possible to identify radical topics, authors and even behaviours in online spaces that contain an overwhelming amount of information. Signs of extremism can be found by drawing on keyword-retrieval software that identifies and counts a specific set of words, or on sentiment analysis programs that classify the opinions expressed in a piece of text. Large-scale, semi-automated analyses can give practitioners and policy-makers a macro-level understanding of extremist movements online, ranging from their radical ideology to their actual activities. This understanding, in turn, can assist in the development of counter-narratives or deradicalization and disengagement programs to counter violent extremism.
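
As a rough illustration of what keyword retrieval and lexicon-based sentiment classification involve, the sketch below counts occurrences of watchlist terms in a post and computes a crude tone score. The watchlist, the tiny sentiment lexicon and the scoring rule are hypothetical placeholders, not the software used in our research; production tools handle stemming, negation, misspellings and far larger dictionaries.

```python
import re
from collections import Counter

# Hypothetical watchlist of terms of interest (placeholders, not a real lexicon).
WATCHLIST = {"enemy", "invasion", "purge", "traitor"}

# A tiny, invented sentiment lexicon; real tools are far richer.
SENTIMENT = {"hate": -2.0, "destroy": -2.0, "fear": -1.0, "proud": 1.0, "love": 2.0}

def tokenize(text):
    """Lowercase the text and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def keyword_counts(text):
    """Count how often each watchlist term appears in the text."""
    return Counter(t for t in tokenize(text) if t in WATCHLIST)

def sentiment_score(text):
    """Average lexicon score per token; negative values suggest a hostile tone."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    return sum(SENTIMENT.get(t, 0.0) for t in tokens) / len(tokens)

post = "We hate the enemy and fear the coming invasion."
print(keyword_counts(post))   # Counter({'enemy': 1, 'invasion': 1})
print(sentiment_score(post))  # roughly -0.33
```

Scaled up to millions of posts, scores like these are what allow analysts to triage an otherwise unmanageable volume of content for closer reading.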


We must caution practitioners and policy-makers that our work suggests there is no simple typology or behaviour that best describes radical online activity or what constitutes radical content online. Instead, extremism comes in many shapes and sizes and varies with the online platform: some radical platforms, for example, promote blatant forms of extremism, while others encourage their subscribers to tone down the rhetoric and present their extremist views more subtly. Nonetheless, a useful starting point in identifying signs of extremism online is to go directly to the source: identifying topics of discussion that are radical at their core, with language that describes the “enemies” of the extreme right, for example, such as derogatory terms for Jewish, Black, Muslim or LGBTQ communities.
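
One way to operationalize that starting point, sketched below purely for illustration, is to extract topics from a corpus of posts and flag any topic whose most prominent terms overlap with an analyst-curated watchlist. The example assumes the scikit-learn library is available; the documents and watchlist terms are invented, and we deliberately avoid reproducing real derogatory vocabulary.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented posts: two about a fictitious "enemy/invasion" narrative, two benign.
docs = [
    "the enemy is flooding our towns and must be stopped",
    "great turnout at the local hockey game last night",
    "they are an enemy within and the invasion has already begun",
    "looking forward to the hockey playoffs this spring",
]

WATCHLIST = {"enemy", "invasion"}  # placeholder terms, not a real lexicon

# Build a word-count matrix, then fit a small topic model over it.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Flag any topic whose top-weighted terms overlap with the watchlist.
for i, weights in enumerate(lda.components_):
    top_terms = [terms[j] for j in weights.argsort()[::-1][:5]]
    flagged = bool(WATCHLIST & set(top_terms))
    print(f"topic {i}: {top_terms} flagged={flagged}")
```

In practice, the watchlist itself would be curated and regularly revisited by subject-matter experts, precisely because, as noted above, the vocabulary of extremism varies across movements and platforms.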

Lastly, in order to gain a broader understanding of online extremism or to improve the means by which researchers and practitioners “search for a needle in a haystack,” social scientists and computer scientists should collaborate with one another. Historically, large-scale data analyses have been conducted by computer scientists and technical experts, which can be problematic in the field of terrorism and extremism research. These experts tend to take a high-level methodological perspective, focusing on measuring levels of, or propensity toward, radicalization, on identifying violent extremists or on predicting the next terrorist attack. But searching for radical material online without a fundamental understanding of the radicalization process or of how extremists and terrorists use the Internet can be counterproductive. Social scientists, on the other hand, may be well versed in terrorism and extremism research, but most are ill-equipped to manage big data, from collecting to formatting to archiving large volumes of information. Bridging the computer science and social science approaches to build on the strengths of each discipline offers perhaps the best chance of constructing a useful framework for assisting authorities in addressing the threat of violent extremism as it evolves in the online milieu.

This article is part of the Ethical and Social Dimensions of AI special feature.




Ryan Scrivens
Ryan Scrivens is a Horizon Postdoctoral Research Fellow at Concordia University and a visiting researcher at the VOX-Pol Network of Excellence.
Garth Davies
Garth Davies is an associate professor in the School of Criminology at Simon Fraser University and the co-director of the online Terrorism, Risk, and Security Studies Professional Master’s Program there.
