The COVID-19 pandemic has brought to light the usefulness of artificial intelligence (AI) solutions in decision-making. Models and data, however, should be governed thoughtfully in order to protect the most vulnerable populations.

Much of the research in this domain has focused on middle- to high-income countries. However, as the virus continues to spread, poorer countries are not being spared. Countries with lower income levels also have more vulnerable populations in a general sense – vulnerabilities that are likely to be exacerbated by inappropriate uses of artificial intelligence.

Research organizations have used modelling techniques to predict the timing and severity of the crisis and to compare the impacts of different possible intervention scenarios. These epidemiological models typically have a compartmental structure, in which the population is divided into four broad categories: susceptible, exposed, infectious and recovered (or deceased). The researchers calculate the percentage of the population in each compartment over time. There is also variation within each compartment. For example, people living in denser settings, such as shelters or refugee camps, can experience higher rates of exposure, while the elderly and those with respiratory vulnerabilities or chronic disease can experience more severe infections and lower rates of recovery.
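To make the compartmental structure concrete, here is a minimal sketch of an SEIR-style model in Python. The function name simulate_seir, the daily time step and all parameter values (transmission, incubation and recovery rates, population size) are illustrative assumptions, not figures drawn from any of the models discussed here.

```python
import numpy as np

def simulate_seir(days=200, population=1_000_000, beta=0.4, sigma=1/5, gamma=1/10):
    """Integrate a basic SEIR model with a simple daily time step.

    beta  -- transmission rate (contacts x infection probability per day)
    sigma -- rate of leaving the exposed compartment (1 / incubation period)
    gamma -- recovery (or death) rate (1 / infectious period)
    """
    # Start with a single infectious case in an otherwise susceptible population.
    S, E, I, R = population - 1.0, 0.0, 1.0, 0.0
    history = []
    for _ in range(days):
        new_exposed = beta * S * I / population
        new_infectious = sigma * E
        new_recovered = gamma * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append((S, E, I, R))
    return np.array(history)

curve = simulate_seir()
print(f"Peak infectious share: {100 * curve[:, 2].max() / 1_000_000:.1f}% of the population")
```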

Modelling the impact of interventions has several objectives. Firstly, having a sense of the number of people requiring critical care, as well as the timing of the peak number of cases, allows governments to allocate hospital resources appropriately. Secondly, it allows different strategies to be tested theoretically in order to “flatten the curve,” slowing the infection rate and reducing the overall number of cases at the peak. These strategies have often been implemented on the basis of model results alone.
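Continuing the sketch above, the effect of a curve-flattening intervention can be illustrated by re-running the same model with a lower transmission rate. The 40 per cent reduction in contacts is an assumed figure for illustration, not an estimate from any cited model.

```python
# Compare a "no intervention" run with a hypothetical distancing scenario that
# reduces the transmission rate by an assumed 40 per cent.
baseline = simulate_seir(beta=0.4)
distancing = simulate_seir(beta=0.24)

for label, run in (("baseline", baseline), ("distancing", distancing)):
    peak_day = int(run[:, 2].argmax())
    peak_share = 100 * run[:, 2].max() / 1_000_000
    print(f"{label}: peak on day {peak_day}, {peak_share:.1f}% of the population infectious")
```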

For example, according to the Food and Agriculture Organization (FAO), more than one-third of the population of Afghanistan is considered severely food insecure, a condition that would make them more vulnerable to infection than their better-nourished counterparts. In addition, the economic impacts of COVID-19 interventions are beginning to exacerbate food insecurity. Modelling methodologies should therefore consider the feedback mechanism between the disease and rates of vulnerability.

While age is not a risk factor that can be reversed, food insecurity most certainly is, as are other risks such as lack of access to healthcare, clean cooking fuel or sanitation. Models that include these kinds of contextual parameters will provide realistic and attainable options for governments trying to mitigate the crisis.
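As a rough illustration of how such a contextual parameter could enter a model, the sketch below reuses the simulate_seir function from above and runs it separately for a hypothetical food-insecure stratum. The 30 per cent higher effective transmission rate and 20 per cent slower recovery are assumptions for illustration only, and the two strata do not mix here, a simplification a real model would avoid.

```python
# Hypothetical stratification by a contextual vulnerability (food insecurity).
# Multipliers are illustrative assumptions, not estimates from the literature.
def simulate_stratified(share_vulnerable=0.35, population=1_000_000):
    n_vuln = int(share_vulnerable * population)
    n_rest = population - n_vuln
    vulnerable = simulate_seir(population=n_vuln, beta=0.4 * 1.3, gamma=(1 / 10) * 0.8)
    rest = simulate_seir(population=n_rest, beta=0.4, gamma=1 / 10)
    return n_vuln, n_rest, vulnerable, rest

n_vuln, n_rest, vuln, rest = simulate_stratified()
print(f"Peak infectious, food-insecure stratum: {100 * vuln[:, 2].max() / n_vuln:.1f}%")
print(f"Peak infectious, rest of population:    {100 * rest[:, 2].max() / n_rest:.1f}%")
```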

The responsible use of data is also an important component of accountable AI in the COVID-19 pandemic. This is particularly relevant in situations of vulnerability, where individuals do not feel able to consent to breaches of their privacy and the use of their personal data.

As highlighted by the Office of the Privacy Commissioner of Canada, during a public health crisis, emergency legislation may trump privacy laws. Moreover, people already in positions of vulnerability such as financial aid recipients or refugee claimants have very little control over their own data protection and very little capacity to consent.

Importantly, data used to fight COVID-19 can be subverted for other, more nefarious purposes. The security firm FireEye has reported that personal data was used in Syria to monitor and target dissidents. In the past, data used by humanitarian organizations to distribute aid has been hacked and diverted in this way in a number of other conflict-ridden areas, including Yemen.

The information collected to address the pandemic has included health information of populations that are marginalized or even oppressed. Humanitarian organizations are having to prevent the use and misappropriation of the personal data of the most vulnerable and impoverished persons in the world by actors ranging from militaries to authoritarian regimes to amorphous hacker armies.

Moreover, the collection and use of population data can generate risks of racial discrimination against vulnerable communities. It is therefore essential to protect sensitive information about individuals and groups.

Data used in artificial intelligence, whether for medical purposes or for epidemiological modelling, can be extremely sensitive, with implications for the privacy and security of individuals and groups. Given the pre-existing tendency to overlook the privacy needs and rights of residents of developing countries, and the current de-prioritization of privacy legislation in the face of the pandemic response, this problem is likely to get worse.

There is no one-size-fits-all solution to the COVID-19 pandemic. However, the societal and democratic effects of artificial intelligence, used as a tool by governments at all levels, will need to be considered from the start. This consideration is even more critical for developing countries, which have fewer resources to address the crisis and less resilience.

A longer version of this piece can be found in Vulnerable: The Law, Policy and Ethics of COVID-19, edited by Colleen M Flood, Vanessa MacDonnell, Jane Philpott, Sophie Thériault and Sridhar Venkatapuram, out now and available open access from the University of Ottawa Press.

This article is part of the Addressing Vulnerabilities for a More Equitable Pandemic Response special feature.

Céline Castets-Renard
Céline Castets-Renard is a professor in the faculty of civil law at the University of Ottawa and holds the research chair on accountable artificial intelligence in a global context.  X: @CastetsRenard
Eleonore Fournier-Tombs
Eleonore Fournier-Tombs is the head of anticipatory action and innovation at the United Nations University Centre for Policy Research and an adjunct professor in the University of Ottawa faculty of law.

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.
