Last week, somewhere deep in the labyrinth of the Carleton University security system in Ottawa, someone hit the wrong key at the wrong time. We all do this; more often than not, it's just a forgotten email attachment. A bit embarrassing, sure, but hardly life-threatening.

On January 28, however, that mistaken keystroke sent out a notice warning of an “ACTIVE ATTACKER ON THE CARLETON UNIVERSITY CAMPUS” to students, staff and faculty. Those responsible tried their best to retract the warning and reassure the university community, sending a revised message within 20 minutes, but despite those efforts:

  • many of those who received the warning experienced genuine fear and anxiety;
  • many of those who received the warning simply shrugged it off;
  • many didn’t even receive the warning, and so were only picking up bits and pieces from those who did — or from the local media, which seemed better informed than anyone; and
  • many didn’t receive the retraction immediately — indeed, some did not get the original false alarm message until 2:58 a.m. the next day, a delay that is hardly conducive to the rapid response an “active attacker” event demands.

In the aftermath, the university will hold a review of its warning systems, which is never a bad thing. But it comes with its own set of risks. In the wake of an embarrassing event, organizations retreat to what they know: process, decision review systems, safeguards. An enhanced set of approvals, for example, could prevent similar errors in the future, but could also hobble a prompt and effective response to an actual threat.

Carleton is hardly alone in trying to harness the communication power of smartphones, social media apps and the like, while wrestling with their myriad potential unintended consequences. Two recent incidents highlight many of the same challenges.

In January 2018, as in the Carleton incident, the wrong key was pressed at the wrong time, resulting in an official in Hawaii sending out the following: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” Amid the political tensions between the US and North Korea, the level of fear that the notification generated among Hawaiians was predictable and understandable. The event at least proved that the system could alert those at potential risk. But of greater concern was the delay in issuing the retraction because — in a circumstance many of us can relate to — the governor had forgotten his Twitter account password.

When the Ottawa region was threatened by tornadoes in September 2018, emergency response authorities had a strong cellphone-based communication plan in place. Then the cellphone infrastructure collapsed, rendering many assumed communication channels useless. In a "new technology be damned" moment, however, those with battery-powered radios were still able to get the latest information.

So what can we learn? First, the very alert systems we so easily criticize are actually crucial steps forward in allowing organizations to get risk-related information to those who need it. Some level of failure does not justify abandoning these systems.

Second, reassuring messages are not what those at risk or perceived risk want or need. They are looking for practical guidance, and in a complex, rapidly evolving emergency, that can be very hard to provide with confidence. Uncertainty management is the challenge.

But finally, and most important, we need to understand that for all the conceptual promise, no technologically based, social-media-driven, stand-alone emergency response communication model will ever work as hoped. Effective and efficient response demands adaptation, flexibility and creativity.

Warning communication systems are by their nature hit-and-miss affairs. From wartime radio messages to TV-based alerting to social media and new technologies, coverage is never complete, message comprehension is always in question and — of course — interest and concern among those affected will vary.

In the case of threats on university campuses, however, there exists an incredible communication resource that should be exploited. Those who have had the privilege to speak to students can’t help but be struck by the connection they have to the online and social media worlds. At one of my first lectures, something I said ended up on Twitter before the mid-session break.

This is a resource and an infrastructure that organizations must take advantage of. Alerts sent to a university community should include specific direction telling the recipients to engage their own networks. Authorities can’t respond to a serious threat on their own; indeed, they need everyone affected to co-manage the challenge.

For administrators, the idea of transferring control can be threatening, and not without its complications. But emergency response experience around the globe repeatedly confirms that marshalling all available resources toward the management of a threat not only is the best way to respond, it may be the only way to respond.

The benefits are multiple:

  • reach will be amplified by tapping into the broad range of existing student networks;
  • impact will be enhanced, because messages from trusted peers carry particular weight; and
  • the community will be brought together, shifting from a top-down model of “do this” to a collaborative model of “we can do this.”

But let’s also be clear on the risks:

  • organizational liability — if incorrect information is shared;
  • organizational criticism — if communication mistakes are made;
  • widespread confusion, exaggerated by the multiple information sources; and
  • loss of control (organizational leaders hate losing control).

It’s worth debating these benefits and risks; good ideas are often messy in the beginning. But the key in thinking through the policy options is to not lose sight of the strategic goals. One goal is to inform those at risk and those who care about them by providing the information they need. Another is to use new communication technology to serve the response. Above all, any learning institution must assume its ethical obligation and protect its staff, its faculty and, most important, its students.



John Rainford
John Rainford is the Director of The Warning Project. He is the former Director, Emergency and Risk Communications for Health Canada and Global Project Lead, Risk Communication Capacity Building for the World Health Organization.

You may reproduce this Policy Options article online or in a print periodical, under a Creative Commons Attribution licence.
