(A French version is available here.)
Relations between racialized groups and Canada's national security and intelligence institutions – such as the Canadian Security Intelligence Service, the Royal Canadian Mounted Police and the Canada Border Services Agency – have historically been marred by mistrust and suspicion. This mistrust harms our national security because it erodes the societal resilience that is essential to countering many of today's national security threats.
It is in this context that the National Security Transparency Advisory Group (NS-TAG) that we co-chair – established in 2019 as part of a series of broader reforms to the country's national security and intelligence architecture – chose to focus its third and recently released report on transparency and on relations between racialized groups and national security institutions.
In this report, we make recommendations on how national security and intelligence institutions can be more transparent in their engagement with racialized communities. Engagement can help the government understand communities' specific needs, identify local voices, open dialogue with them, and build trust and a shared understanding of common challenges. Engagement also serves a bridging function: engagement programs work on behalf of multiple parts of the government, exchanging information with external stakeholders and bringing it back inside government, ideally to feed into policy and operational processes.
The NS-TAG, composed of 10 members with diverse backgrounds drawn from academia, civil society and the ranks of retired public servants, is an external advisory group that advises the deputy minister of public safety (and other national security departments and agencies) on how to improve transparency. It also aims to increase public awareness of, engagement with and access to national security information.
In consultations over the past three years, we frequently heard about a trust gap between the country's national security institutions and Canadians, in particular racialized Canadians. At times, these relations have been strained by mistrust and suspicion, and by errors of judgment by these institutions that racialized communities have perceived as discriminatory.
Engagement with racialized communities needs to involve a two-way conversation. As we heard in our consultations, past efforts at engagement too often involved government officials offloading a prepared message and failing to listen to the concerns of stakeholders. Constructive engagement should instead be based on dialogue. Officials should be attuned to the questions and concerns of stakeholders, should listen to them and should be prepared to respond.
For such engagement to be feasible, deeper structural challenges in national security institutions must be addressed. Accordingly, our report also offers recommendations on these broader issues, notably on how to enhance diversity and inclusion, and how to make complaint mechanisms more accessible to vulnerable groups.
As digitization accelerates, the data-driven dimensions of national security continue to expand. As a result, the national security apparatus will become more dependent on algorithmic methodologies and digital tools to gather and process massive data holdings – a trend the COVID-19 pandemic has intensified. It is clear, however, that systemic biases in artificial intelligence (AI) design can have perverse impacts on vulnerable groups, notably racialized communities. These biases reflect not only specific flaws in AI programs and in the organizations using them, but also underlying societal cleavages and inequalities that are then reinforced. As we heard during our consultations, this further erodes trust in national security agencies and inhibits effective relationship-building. As the federal government considers reasonable and appropriate uses of AI, we need to ensure that its impact on racialized communities is not disproportionate and does not perpetuate biases.
There is therefore growing agreement that openness and engagement play a vital role in ensuring accountability and effectiveness in current and future AI deployments. Among other recommendations, we suggest that, as part of their regular transparency reporting, national security agencies provide details of their AI activities as well as their efforts to mitigate unintended consequences and systemic biases within such systems.
In light of the growing attention to AI use, there can be no excuse for inaction in seeking greater transparency. The relationship between transparency and trust is complex and fraught with risks – in general, and especially in the realms of national security and digitization. The risks stemming from opacity and insularity, however, are much greater. There is an opportunity to be proactive and thoughtful in proceeding down this path, and a need to do so with greater sensitivity to those communities who have good reason to be suspicious in light of historical biases and missteps that governments now recognize and seek to address.