The minister of democratic institutions recently announced coordinated efforts to safeguard elections from online threats. The announcement highlighted concerns that social media platforms could be used to spread disinformation and exploit social tensions, and called for these companies to take "concrete actions to increase transparency, authenticity and integrity of their systems to help safeguard our election." Still, no concrete regulatory action has yet been taken to address how large social media platforms deal with harmful content.
Public policy has largely left these companies alone to moderate the content that individuals and organizations put online, even though most Canadians are online for hours every day. We call this work "content moderation," and it is an enormous challenge, as vexing as any media policy issue facing Canada.
Moderation involves making decisions about millions of posts around the world every day, balancing the protection of individual rights against the overall welfare of the system. The work is done behind the screen, rarely visible to users.
Canada needs an institution for content moderation. Not to moderate the actual discussions, but to improve transparency and fairness in the development and implementation of content moderation rules, including those that affect Canadians' democratic participation.
In a recent report published by the Public Policy Forum, Poisoning Democracy: How Canada Can Address Harmful Speech Online, we propose a moderation standards council as one of several measures to address these issues. Convened at the request of the federal government, the council would bring together companies, civil society and governments to protect and improve free expression online.
Moderation in all things digital
Content moderation is an enduring question in online communication. Early internet communities, like USENET and chat rooms, needed codes of conduct and moderation guidelines to remain open to anyone yet resilient to abusive and destructive participants. These problems have gone mainstream. Today's major social media platforms make and implement rules regarding their users' content, from Facebook's Community Standards to YouTube's Community Guidelines to the Twitter Rules.
The platforms moderate content in obvious and hidden ways. Algorithms do much of the work, automatically filtering content or flagging it for review. Simultaneously, tens of thousands of people, part of Silicon Valley's shadow workforce, review content that algorithms or users have tagged as problematic. Humans and machines work to identify and delete content that violates terms of service or community standards, or that may be illegal in particular jurisdictions, such as child pornography.
Social media companies are now more transparent about content moderation, but there is still much we don't know about how these rules are made or implemented, or what their effects are. For instance, Tumblr's recent ban on pornography, made without consultation with users, threatens to erase the LGBTQ youth communities that formed on the platform.
Harmful speech and poor moderation
Poor moderation of speech that someone deems harmful can undermine opportunities for free, full and fair participation in online debates by all Canadians. Harmful speech is a broad term covering a range of communication, some of it illegal (such as hate propaganda), that marginalizes and harms others and normalizes toxicity. In a rapidly evolving online world, the forms of and forums for harmful speech are changing all the time.
Harmful speech that content moderation policies fail to address poses obstacles to political participation by women, racial and cultural minorities and other targeted groups. Women in politics, such as MLA Sandra Jansen in Alberta and Councillor Kristen Wong-Tam in Toronto, endure increasing online harassment and misogyny. Candidates feel threatened, and staff become overwhelmed dealing with the harmful feedback.
These are the known cases, but probably only a small sample of the problem. As many as one-quarter of Canadians have experienced online harassment. Without greater oversight of content moderation, we will not know the true extent of the problem.
Canada can take a leading role to address harmful speech by convening a Moderation Standards Council through the Canadian Radio-television and Telecommunications Commission (CRTC). The council can be created with more or less government involvement, as we describe in our Poisoning Democracy report, to address harmful speech and content moderation. The European Union has started a similar process involving a code of practice for disinformation.
But social media companies will fight any regulation, right?
Not necessarily.
The major social media companies recognize that some government regulation of their content is inevitable, and perhaps even desirable. In a November 2018 Facebook post, A Blueprint for Content Governance and Enforcement, Mark Zuckerberg wrote that "At the end of the day, services must respect local content laws, and I think everyone would benefit from greater clarity on how local governments expect content moderation to work in their countries." In the same post he proposed the creation of a global organization, independent from Facebook, that would serve as a high-level appeals process and advise the company's policy teams on acceptable speech on the platform. Zuckerberg's proposal remains in development, and experts' questions about the independence, impact and authority of the proposed organization remain unresolved.
Facebook and other large platforms cannot fix the internet alone. Lack of coordination between sites creates confusion among users and opportunities to pit platforms against each other. Alternative websites such as Voat and Gab, self-described free speech havens, rose to popularity after many of their users were removed from other platforms for harmful speech. Lagging behind or purposefully ignoring the issues, these sites gave domestic terrorists a safe space to share conspiracy theories and violent fantasies that too often have been acted out offline.
Content moderation requires coordination and cooperation across platforms. A moderation standards council would bring together platforms with common features, such as photo-sharing, micro-blogging or social networking, to create best practices and signal leaders in the field.
Harmful speech as catalyst
Harmful speech isn't the only problem for a moderation council to tackle. It could also help platforms address concerns about disinformation. Under its mandate, the council could establish clear definitions for harmful speech, working toward shared standards. Often users don't know or don't understand the moderation process. Clear definitions could help promote media literacy and ultimately improve users' understanding of moderation across platforms.
The council would encourage companies to share best practices on moderator training, community standards, and which moderation tools work to address harmful speech and which do not. These best practices need not only aim to alleviate harmful speech but could also seek to help platforms support better democratic conversations and more inclusive spaces.
The council should encourage transparency by platforms, and facilitate comparisons of their efforts. How have different platforms tried to address the issue of harmful speech? Which lead? Which exacerbate the issue? And to help the industry as a whole, a council could help leaders, civil society and governments decide how to establish fines or other penalties for platforms that do not meet expectations.
Liability and accountability
Canada might well have to address content moderation soon. The US-Mexico-Canada Agreement (USMCA) will shape our approach to moderation. It exempts platforms from liability for hosted content that might be libellous, infringing or harmful. The clause exports the American position, a hands-off approach, to Canada. The clause does encourage companies to be open with the public and to moderate in good faith, but its implementation in Canada will require more nuance in how harmful speech is handled.
As Canada meets its obligations under USMCA, platform liability exemptions could be made contingent on companies of a certain size, age or revenue participating in good faith in the council.
One bold path forward would be to have the CRTC mandate companies to create this council, a co-regulation approach similar to the Canadian Broadcast Standards Council. The CRTC would mandate the work of the standards council and set specific binding commitments to improve the transparency and accountability of content moderation.
Ultimately, Canadians and social media companies would benefit from an institution that improves the transparency, effectiveness and public understanding of content moderation policies. In doing so, a moderation standards council could help address concerns raised by the minister of democratic institutions and others about the impact of social media platforms on Canadian democracy.