When she was just 18, Australian Noelle Martin found falsified sexually explicit images of herself on the internet, crudely created using photographs taken from her social media accounts. While the images were not real, they nonetheless caused her deep and irreparable harm.

Years later, she experienced a similar violation. This time, however, her likeness had been superimposed onto an eerily realistic pornographic video, created using a form of generative artificial intelligence (AI) technology known as “deepfakes.”

This technology produces fake audiovisual content of an identifiable person using their biometric data.

Tools to do this have advanced rapidly and become widely accessible, and they rely on data that is readily available on social networks. A handful of photos and a few seconds of voice recording are often enough to reproduce a person’s likeness with striking realism.

Deepfakes are used mostly to attack, harass and harm women. Tech giants such as Google have taken some action to date. Updated telecom regulations can play a part. But Canada also needs urgent changes in its legal and regulatory frameworks to offer remedies for those already affected and protection against future abuses.

As this technology advances and simultaneously becomes cheaper and more accessible to the public, it is becoming increasingly difficult to distinguish fakes from real footage. Although the technology may have legitimate applications in media production, its malicious use, including the production of deepfake pornography, is alarming.

For example, AI-generated fake nude photos of singer Taylor Swift recently flooded the internet. Her fans rallied to force X, formerly Twitter, and other sites to take them down but not before they had been viewed millions of times.

While the incident shows that any woman can be targeted, it also illustrates a serious problem faced by non-celebrities: they lack that kind of massive public support to identify the fakes and fight back. Private individuals may not be targeted as frequently as celebrities, but they are still vulnerable.

Recognizing the gendered harm of deepfake pornography

Deepfake pornography is a form of non-consensual intimate image distribution (NCIID), colloquially known as “revenge porn” when the person sharing or providing the images is a former intimate partner.

Recent advances in digital technology have facilitated the proliferation of NCIID at an unprecedented scale.

As noted in a ruling in the Quebec court case of R v. A.B., with “a mere swipe on a smartphone’s screen, one can immediately become a director, producer, cameraman, and sometimes an actor in an explicit short film.”

As well, an Alberta court noted in the case of R v. Haines-Matthews: “Once the electronic images are transmitted, there is very little, if any, control over who may access them, where they may end up, or how long they will be accessible on some internet site.”

Following concerted advocacy efforts, many countries have passed legislation to hold perpetrators liable for NCIID and provide recourse for victims. For example, Canada criminalized the distribution of NCIID in 2015 and many of the provinces followed suit.

Up to 95 per cent of all deepfakes are pornographic and almost exclusively target women. Deepfake applications, including DeepNude in 2019 and a Telegram bot in 2020, were designed specifically to “digitally undress” pictures of women.

Viewing the evolution of deepfake technology through this lens shows the gender-based violence it perpetuates and amplifies. The potential harm to women’s fundamental rights and freedoms is significant, particularly for public figures.

For example, Rana Ayyub, a journalist in India, became the target of a deepfake NCIID scheme in response to her efforts to report on government corruption.

AI technology was used to graft her face onto a pornographic video, which was then distributed. The artificial nature of these images did little to mitigate the harm caused to her reputation and career. She faced widespread social and professional backlash, which compelled her to move and temporarily pause her work.

The harm caused to any victim by NCIID is significant. “Reputations are ruined, self-esteem is shattered, feelings are hurt, and privacy is irreparably violated,” the Quebec court ruling noted.

Deepfake pornography inflicts emotional, societal and reputational harm, as Martin and Ayyub discovered. The primary concern isn’t just the intimate nature of these images, but the fact that they can tarnish the person’s public reputation and threaten their safety.

The rapid and potentially rampant distribution of such images poses a grave and irreparable violation of an individual’s dignity and rights.

That means the same justification exists for government intervention in cases of deepfake pornography as for other forms of NCIID that are already regulated.

Arguably, the threat posed by deepfake pornography to women’s freedoms is greater than previous forms of NCIID. Deepfakes have the potential to rewrite the terms of their participation in public life.

While the use of falsified NCIID to oppress women is not new – photoshopped pornography has existed for decades and non-consensual intimate artworks for centuries before that – deepfake technology provides a readily accessible medium that is more realistic and more visceral than anything that predates it.

Unlike authentic images or recordings, which can be protected from malicious actors – albeit imperfectly because there are always hacks and leaks – there is little that people can do to protect themselves against deepfakes.

The personal data required to create deepfakes can easily be scraped by anyone through online social networks. In our increasingly digitized world, it is near-impossible for individuals to participate fully in society while guaranteeing the privacy of their personal data.

Since individuals – specifically women – have limited power to protect themselves from malicious deepfakes, there is an even stronger impetus for regulatory action.

Legal remedies and deterrence

The legal system is poorly positioned to effectively address most forms of cybercrime and only a limited number of NCIID cases ever make it to court. Despite these challenges, legislative action remains crucial because there is no precedent in Canada establishing the legal remedies available to victims of deepfakes.

Civil actions in torts such as the appropriation of personality may provide one remedy for victims. Multiple statutes could theoretically apply, such as criminal provisions relating to defamation or libel, as well as copyright or privacy legislation.

However, the nature of deepfake technology makes litigation more difficult than other forms of NCIID. Unlike real recordings or photographs, deepfakes cannot be linked to a specific time and place. In many cases, it is practically impossible to determine their origin or the person(s) who produced or distributed them.

The technology underlying deepfakes is also difficult to ban because even when specific apps are removed, their underlying code remains available in open-source repositories.

In addition, even before recent developments in artificial intelligence, digital technology challenged the regulatory power of bodies such as the Canadian Radio-television and Telecommunications Commission (CRTC), which is charged with administering the 1991 Broadcasting Act.

Whereas radio and television have finite broadcasting capacity with a limited number of frequencies or channels, the internet does not. Anyone with a smartphone can immediately become a broadcaster. Consequently, it is impossible to monitor and regulate the distribution of content to the degree that regulators such as the CRTC did in the past.

One of the most practical forms of recourse for victims may not come from the legal system at all.

Major tech platforms such as Google are already taking steps to address deepfake porn and other forms of NCIID. Google has created a policy for “involuntary synthetic pornographic imagery” enabling individuals to ask the tech giant to block online results displaying them in compromising situations.

However, public regulatory bodies such as the CRTC also have a role to play. They can and should be exercising their regulatory discretion to work with major tech platforms to ensure they have effective policies that comply with core ethical requirements and to hold them accountable.

Deepfakes, like other digital technology before them, have fundamentally changed the media landscape.

This unavoidable disruption demands an evolution in legal and regulatory frameworks to offer remedies for those affected. Deepfakes particularly threaten participation in public life, with women suffering disproportionately. It is crucial to ensure that effective remedies are available.

Shona Moreau
Shona Moreau recently graduated from McGill University's Faculty of Law. Her research focuses on security, political economy, human rights and administrative law.
Chloe Rourke
Chloe Rourke recently graduated from McGill University's Faculty of Law. She is interested in human rights, employment and labour, environmental and competition law.
