In an era of continuous social and technological change, how can an institution like the federal government stay relevant? How can we as public servants stay up to date on what Canadians want from us? How do we react to issues in society at the pace that Canadians expect of us?
Artificial intelligence (AI) is the latest tool in a suite of technologies being employed to meet the demands of an increasingly digital population. The proliferation of these technologies has provoked a growing international dialogue about the effects of advanced data analytics and AI on society and what government needs to do to adapt to this new reality. Despite this rapidly changing environment, there has been less discussion about what it means for government to deploy these tools for its own purposes.
Data analytics and AI provide the federal government with an opportunity to redefine the ways it interacts with its diverse client base. They can help us provide Canadians with a more personalized service experience that is responsive to their needs. They provide us tools to predict threats to health and safety, or to discover new economic opportunities for Canadian businesses.
The potential is enough to excite any public servant: provide uninterrupted service to clients, respond to inquiries quickly, and discover trends and opportunities that humans may never have found on their own. We can derive greater insight into the operation of the public service, and automate onerous internal processes that take too much of our time away from serving Canadians. This excitement is worldwide; in countries like Australia and Singapore, a chatbot assistant can help you file your taxes. The United States uses chatbots to help people navigate the immigration system, or to help businesses navigate procurement rules. Here in Canada, we use AI to help predict how infectious disease outbreaks will behave and spread.
These tools could be coming on the scene at precisely the right time for government.
Worldwide, trust in public institutions is deteriorating. The Auditor General of Canada has repeatedly criticized our service delivery to the public. High-profile IT failures in government are contrasted with ubiquitous stories of advancing technology in the private sector. The perception of the government as a competent and relevant institution is shaken.
Emerging technologies are not a clear solution to these governance issues, but if implemented correctly, they can help. Embracing them means that we must also embrace the disruptive change that they bring. We cannot afford to entrench ourselves in a stubborn resistance to adapting, or we risk jeopardizing our relevance.
Approaching the future responsibly
It goes without saying that data analytics and AI have brought considerable ethical issues to the fore. Algorithms used by the public and private sectors have already affected people’s lives in fundamental ways, such as parole decisions, loan underwriting decisions and hiring decisions. These decisions rely on data that may be collected or used with bias; for example, in 2015 a study from Carnegie Mellon discovered that compared with men, women were less likely to be shown high-paying jobs by Google’s job advertising system. If this type of algorithmic bias extended into other services and decisions, it could be highly problematic for vulnerable populations. Certain AI methods make decisions that may be difficult to interpret, which raises the question of whether these decisions would withstand the review of a court or tribunal.
As a result of these issues, there have been calls, most notably from the AI Now Institute at New York University, for limits to the use of AI in critical public decisions that affect health, welfare or liberty. Jurisdictions worldwide have introduced rules to ensure that their citizens are provided with the right to an explanation of the decisions that machines may make about them.
Given these issues, it is evident that AI is not a technology we should leap toward blindly; rather, we should approach it with intentional and careful steps and learn along the way. But neither can we ignore it, waiting to see how the technology evolves while forgoing the potential benefits it could deliver for people. The implementation of AI in government will require an artful balance between sometimes opposing forces: innovation versus stability; experimentation versus inclusiveness; good service versus program integrity.
Canadians must be provided with a clear path to challenging government decisions about themselves. The balance won’t be easy to reach, and it will be even more difficult to maintain. If we disclose too little information about algorithmic decisions, we risk creating an impenetrable and frustrating system in which decisions about one’s own life are difficult to challenge. On the other hand, if we disclose too much, the algorithms that we use for decision-making could be manipulated by those looking to defraud the government.
Changing government and governance
Responsible implementation of AI will encourage government institutions to consider more carefully how they deploy this disruptive technology. It will push all public servants to become more data literate and to understand new tools as they emerge. It will break down the barrier that typically separates policy and technology experts, a barrier that should never have existed. It will demand continual collaboration among sectors and levels of government. It will require that institutions prioritize ethnic and gender diversity in their development teams, both as a safeguard against harmful data bias and to capture the well-documented gains in innovation that diversity brings.
The success or failure of AI in government will not depend on better project management, but on better management of change. Automation will reduce the need for humans to perform myriad tasks. It will also create jobs that defy categorization. Training and more flexible human resource tools will be necessary as the business of government changes. How do we attract extraordinary talent to work on such government challenges when their skills are in such high demand?
There may also be impacts that extend beyond the bureaucracy to other institutions of governance that we collectively come into contact with daily, such as legislatures, arm’s-length agents, courts, and political parties. Will opposition backbenchers have the tools they need to challenge a government program that is partially designed and implemented by algorithms? Will courts and tribunals be prepared to hear cases related to algorithmic bias and human rights?
As useful as AI is, its deployment cannot come at the expense of our democratic institutions. Our systems and the analysts who interpret them must be adaptable enough to serve successive governments equitably, so that we can continue to provide ministers with clear and digestible advice, and to implement their instructions without undue delay. Civil society and opposition parties need access to information that will allow them to question government effectively. It will be incumbent on public servants working in the field to be extraordinary communicators as well, unpacking complex technical concepts simply so that Canadians can understand how policy decisions are reached.
There is a lot to be gained from AI, but there is also a lot at stake. That said, if we work openly, design and iterate effective rules and take thoughtful risks, the technology presents us with an opportunity to provide excellent government services. The decisions that we make now will shape the future of governance in Canada.
This article is part of the Ethical and Social Dimensions of AI special feature.