We can look at 2017 as the high-water mark for Silicon Valley — when its wave of disruption finally started to recede. The pushback was a reaction to a string of bad headlines: Kremlin-backed political interference. Rising hate speech. Open hostility to women in politics. Discriminatory ads using racist keywords. Something broke online last year, and now fewer than half of Canadians (49 percent) trust social network platforms, according to a recent survey (see slide 179).

Now Canada has to look past the initial enthusiasm for social media and develop a better approach to its governance. In radio and television, by comparison, Canada has robust processes for consultation and deliberation, administered by the Canadian Radio-television and Telecommunications Commission (CRTC). Investigations by the CRTC have brought together companies, advocacy groups and citizens to resolve issues about the content on Canadians’ screens and about standards for advertising. These issues are not so different from today’s concerns about social media.

Currently, governance of social media is industry-led by default, but this approach is proving to be untenable at a time of huge media concentration. In Canada, the Internet experience is largely controlled by two companies: Facebook and Google. Google is the most visited website in Canada, and 68 percent of Canadians use Facebook. These two companies account for nearly three-quarters of the online advertising market. Their market dominance puts Facebook and Google in an unenviable position. As they come to define the Internet experience more and more, they are increasingly accountable for fixing that experience themselves.

Google and Facebook, along with Twitter, are scrambling to manage the issues associated with their platforms, as we learned when their lawyers testified before Congress in October and November 2017 about foreign interference during the last US presidential election campaign. Their testimony, as well as growing international evidence, demonstrates that these companies have struggled to contain their ill effects on democracy.

The federal election coming up in 2019 provides Canada with a deadline that should prompt us to reassess the governance of social media so far. Changes we can make as we prepare for the next election may not fix all the problems with the Internet in Canada, but they’d be a start.

Social media’s threats to public discourse

Social media governance must begin from a systems-level perspective, questioning the ways Canadians access content online, and acknowledging the specific influence of platforms in recommending content and administering advertising. The analysis here of three major concerns builds on my past work with Elizabeth Dubois on political bots (as discussed previously in Policy Options).

One key area of concern is the “discoverability” of political information: namely, how and why certain political content gets recommended on social media. Content recommendation is, unfortunately, a collaborative effort, as we learned in December after a false report by the French-language network TVA alleging misogynist behaviour by two Montreal mosques went viral. Users and journalists, as much as platforms, decide what becomes easy to find online. Platforms, however, play an important role in setting the rules of the game. Their algorithms, which are constantly changing, push some content to the top of search results, but do they promote the best content or merely the most engaging content? The most engaging content has also been found to be the most disgusting and anger-inducing. For the 2019 election, the risk is that the easiest information for voters to find about politicians will also be the most toxic, the most hostile and perhaps the most scandalous, as evidenced by the outrage following the erroneous TVA report. Although social media didn’t start the trend toward cheap, salacious political news and attack ads, its vaunted power of disruption would be welcome if the platforms used it to break this habit.

So far, public shaming has been the most effective way of getting platforms to recognize the adverse effects of their content recommendation. Reports of disturbing children’s content, for example, prompted YouTube to revise its policy to restrict inappropriate use of characters from kids’ entertainment. We need to do better than this. It is in everyone’s interest to avoid scandal as our principal policy instrument.

Discoverability is a problem that platforms cannot solve alone: more Canadian institutions need to investigate and support research across these platforms. Elections Canada, journalists, parties and social media companies themselves should audit the discoverability of political content online as a proactive check in the lead-up to the election. For their part, platforms need to recognize that their reluctance to adequately enable research (along with their criticism of research about them) has limited the kinds of investigation and analysis that the public needs to see. Although Facebook is actively improving the digital capacity of journalists with new tools and programs, it has withdrawn access when those tools were used to investigate sensitive issues like foreign interference in elections.

Another reason for concern is the impact of trending, through which platforms, in effect, make explicit judgments about the relevance of information. Facebook, Twitter and Google all have trending features that recommend popular stories or events. Trends are comparable to the front page of a newspaper. Facebook, for example, includes a sidebar on its home page listing trending stories. Importantly, only English-speaking Canadians see it; when a user changes the language to French Canadian, the sidebar disappears. To those Canadians who do see trends, the benefit is marginal. A few students and I tracked what trended on Facebook for a week. Out of the 100 stories on my general and political trending tabs, only 10 were Canadian. Nor were the trends global: 56 stories were about the United States. Given this American bias, what is the Facebook Trends sidebar doing in Canada? Facebook should consider removing the sidebar for news and political affairs, or at least justifying its benefit to Canadians. Trends might be a vulnerability to be gamed during the election or, at best, a source of distraction from the important questions of the campaign.

Finally, online advertising needs better oversight during elections. Facebook advertising has been used to strategically suppress the vote in the United Kingdom by targeting ads to swing constituencies and discouraging opponents’ supporters from coming out on election day. We don’t know whether similar ads have run in Canada. These so-called dark ads are especially problematic if they amplify racism or cultural difference. Facebook ads have been used to illegally discriminate against African Americans through the platform’s “multicultural affinities” categories. Facebook has announced a temporary ban on any ads that exclude users based on multicultural affinity, a welcome step that might need to remain in place for the 2019 election if no better solution is found for dark ads.


Facebook and Twitter have gone farthest in bringing greater transparency for digital ads. Both have announced global initiatives to improve advertising transparency; Canada is actually the first country where Facebook’s program is being tested. Neither company, however, has endorsed the Honest Ads Act, proposed US legislation that would require companies with over 50 million monthly viewers to keep a copy of political ads on their platforms. Hopefully, online platforms will work with Elections Canada to establish an archive of election advertising and ensure targeting complies with the spirit of the Canada Elections Act.

But there is a more troubling concern about the procedures that platforms use to place advertising in front of consumers’ eyes. While the use of rubles to buy ads exemplifies the accountability problem with Facebook’s ads during the 2016 US election, Google’s decision to stop selling ads for opioid addiction treatment illustrates the advertising placement problem. Opioid addiction is a health disaster, but it is a lucrative one if you are a treatment centre recruiting clients with insurance coverage. Google ads have been one way to attract people searching for help with their addictions. Too often the ads came from fraudulent providers.

This was possible because most online ads are placed through auctions. The highest bidder wins the spot and, the platform hopes, your attention. The rationale underlying this system is that the market is the best information processor, so the highest bid will produce the best ads. But in the case of addiction treatment centres, the auction resulted in people with serious health issues being shown ads for dubious treatment programs. Google eventually responded by banning all ads for rehab centres. Though Google continues to search for cures for its marketplace woes, we might draw an unsettling lesson from its response: perhaps the highest bidder is not the best source of information.
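The problem with bid-only allocation can be illustrated with a toy model. This is a deliberately simplified sketch, not any platform’s actual auction system; the advertiser names, bids and quality scores are invented for illustration. The slot goes to the highest bidder, and a quality measure, even if one exists, plays no role in the outcome.

```python
# Toy model of ad-slot allocation by auction (illustrative only).
# Each advertiser submits a bid along with a hypothetical quality score;
# the winner is simply the highest bidder, and quality is ignored.

def run_auction(bids):
    """bids: dict mapping advertiser name -> (bid_in_dollars, quality_0_to_1).
    Returns the advertiser with the highest bid, ignoring quality."""
    return max(bids, key=lambda name: bids[name][0])

bids = {
    "licensed_clinic": (2.50, 0.9),  # reputable provider, modest bid
    "lead_broker": (8.00, 0.2),      # dubious operator, aggressive bid
}

winner = run_auction(bids)
print(winner)  # the aggressive bidder wins the slot despite low quality
```

In this sketch the dubious operator outbids the reputable clinic and captures the searcher’s attention, which is exactly the failure mode the rehab-ads episode exposed: willingness to pay is a poor proxy for trustworthiness.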

In all three areas — discoverability, trending and advertising — the system used to promote and recommend the topics of our public discourse online is flawed, often leading to troubling content rising to the top. Indeed, the ability of Facebook, Twitter and Google to grow so quickly through automation has created a fragile media system that the events of 2017 have shown to be easy to abuse.

The end of goodwill?

What was most disconcerting about the testimony of the online giants before Congress was the revelation of how easily their platforms were misused. Attacks alleged to be directed by the Kremlin did not use technical sophistication; they just broke social norms, picking scabs of racism and hatred. There was no secret artificial intelligence device or complex modelling, just bad actors willing to go there. Anyone following the treatment of women online should already know how uncivil conduct can destroy online community. By now, we also know that social media companies have been largely ineffective at addressing such problems.

Goodwill may have been Facebook’s or Google’s killer app. People spent years making these platforms popular, filling their sterile blue or white pages with love, joys and the fragile beauty of living. Today that goodwill seems in short supply. After years of working to build the world’s largest social network, people might be tiring of sharing so much and getting so little back. Research tends to show social media has a positive effect on political participation, but that influence might decline if 2017 has ended the goodwill toward social media in our lives.

The debate about social media and democracy is far from over, but it is evident that social media’s benefits need to be clearly stated and its flaws better acknowledged. Discoverability, trending stories and advertising all must be subject to greater public scrutiny before the next federal election. Achieving solutions will require better social media governance. Canada’s existing institutions of media policy, like the CRTC, should be a good model as we develop this governance.

Photo: Shutterstock, by percom.



Fenwick McKelvey
Fenwick McKelvey is an associate professor in the Department of Communication Studies at Concordia University. He is co-director of the Applied AI Institute and the Machine Agencies Working Group at the Milieux Institute. Twitter @mckelveyf

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.
