As automated tools become increasingly common in the digital landscape, we need more transparent systems to identify bot accounts and ensure they are not causing harm to society. But calls for blanket bans and purges are overblown.

Demands for regulation or bans on social media bots have littered the opinion pages of respected news outlets over the past two years in the wake of the 2016 presidential election and the Brexit referendum. Prominent pundits like Thomas Friedman and many researchers have blamed social media for undermining democracy. Some attribute Hillary Clinton’s loss to the effectiveness of foreign-operated bots that helped spread misinformation and disinformation.

Platforms have responded to these calls for purges. Both Facebook and Twitter have deleted accounts en masse, without clarifying whether the deleted accounts were social media bots, pseudonymous accounts run by real people, or outright fake accounts.

These are Band-Aid measures. We agree that fake accounts posing as humans in order to spread fake news and foment unrest should be addressed. But social media bots are different from fake accounts, and the idea of banning all bots is misguided.

For one thing, banning bots would not tackle the underlying problems of micro-targeted advertising, filter bubbles and social media algorithms that learn from our confirmation bias and feed us ever more partisan information. Nor would it address the role humans themselves play in spreading disinformation: studies have shown that false news reports are shared more widely and garner greater engagement than factual ones.

For another, advocates of bot bans see the issue purely through a political lens and ignore the tremendous opportunities that online bots offer society more broadly. Driven by rapid advances in machine learning and natural language processing, bots have gone from performing repetitive tasks online to undertaking sophisticated operations that even humans struggle with.

Bots can monitor a vast number of indicators and sound the alarm when the conditions they are programmed to watch for are met. Bots are helping provide product support, spreading government warnings and even leading anti-corruption movements. For example, a bot called Rosie (@rosiedaserenata) analyzes open expense data and automatically tweets about irregular expenses of members of the Brazilian Congress.
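To make the pattern concrete, here is a minimal sketch of how such a monitoring bot might operate: ingest a dataset, flag outliers against a simple rule, and publish an alert. The sample data, threshold rule and post_alert stub are illustrative assumptions for this article, not Rosie's actual implementation; a real bot would publish through the Twitter API rather than printing to the console.

```python
# Illustrative sketch of an expense-monitoring bot. The data, the
# median-based threshold and post_alert are assumptions, not Rosie's code.
from statistics import median

def find_irregular(expenses, factor=3.0):
    """Flag expenses exceeding `factor` times the median for their category."""
    by_category = {}
    for e in expenses:
        by_category.setdefault(e["category"], []).append(e["amount"])
    medians = {cat: median(vals) for cat, vals in by_category.items()}
    return [e for e in expenses if e["amount"] > factor * medians[e["category"]]]

def post_alert(expense):
    # A real bot would call the Twitter API here; we just print the alert.
    print(f"Possible irregular expense: legislator {expense['legislator']} "
          f"claimed ${expense['amount']:.2f} for {expense['category']}")

if __name__ == "__main__":
    sample = [
        {"legislator": "A", "category": "meals", "amount": 35.00},
        {"legislator": "B", "category": "meals", "amount": 42.50},
        {"legislator": "C", "category": "meals", "amount": 480.00},  # outlier
    ]
    for e in find_irregular(sample):
        post_alert(e)
```

The point of the sketch is how simple the core loop is: the value of such a bot lies less in clever code than in its tireless, transparent scrutiny of public data.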


So how do we minimize the harmfulness of bots without limiting their benefits?

According to our analysis of various platforms, Wikipedia’s bot policy is the most effective at minimizing risks without sacrificing functionality. It stipulates that bots, which on Wikipedia handle tasks such as correcting spelling and adding metatags, “must be harmless and useful, have approval, use separate user accounts, and be operated responsibly.” Wikipedia bots can go live only once their application has been approved by the platform, and they are registered publicly online.

In addition, platforms should clearly identify which accounts are bots, particularly those verified as harmless. This would help build trust and allow for harmful bot accounts to be identified and deleted and for useful bots to be saved from purges. Widespread concern over the new Google Assistant’s ability to accurately impersonate humans has further highlighted the need for programs to self-identify as bots.

The key is transparency. Easily identifiable, registered and approved bots will help ensure a healthy digital ecosystem. A heavy-handed political response not only will jeopardize the future functionality of useful bots but will likely prove ineffective.




Arjun Bisen
Arjun Bisen is an Australian Fulbright Scholar pursuing a master’s degree in public policy at Harvard Kennedy School, where he has focused on the intersection of technology and foreign policy.
Yasodara Córdova
Yasodara Córdova is a digital fellow at the Harvard Kennedy School and a Berkman Klein Center affiliate. She worked for the World Wide Web Consortium (W3C) as a Web specialist and for the UN as a technical consultant on innovation.
