Governments around the world are investing in artificial intelligence (AI), preparing for the changes ahead as this new technology diffuses. French President Emmanuel Macron recently announced €1.5 billion for AI research. China recently announced a $2-billion AI technology park near Beijing, one of its many investments in the technology. Here in Canada, our government committed $125 million to an AI strategy in March 2017, plus significant additional funding as part of the Innovation Superclusters Initiative. Are we investing enough to compete, or do we need to match the billions invested by other countries?

The answer depends on our strategy and objectives. What is our national strategy and what are our short-, medium- and long-term objectives with respect to exploiting this powerful new technology to benefit society? To answer these questions, we must first clarify the capabilities of AI. Even casual observers are aware that the technology is improving rapidly. The year 2018 may well be a milestone in artificial intelligence comparable to 1995 for the Internet. AI is poised to take off. But what, exactly, can it do?

Despite science fiction’s depictions of AI-powered humanoid robots (Blade Runner, Westworld, Ex Machina), the current advances in AI are primarily driven by a singular capability: prediction. Broadly defined, prediction is the process of taking information you have and generating information you don’t have. Prediction is useful because it helps decision-making. Better predictions mean better decisions, from medical diagnosis to hiring practices to driving.

As economists, we find it useful to think about these changes as a drop in price. The rise in AI technology can be recast as a drop in the quality-adjusted price of prediction, as outlined in our new book, Prediction Machines: The Simple Economics of Artificial Intelligence. This is a useful perspective because it enables us to use well-tested economic tools for anticipating what will happen as the technology advances and the price drops. When the price of something falls, three things happen. Understanding these three things is key to creating a framework for AI policy.

First, when the price of something falls, we use more of it. When coffee becomes cheaper, we drink more coffee. When prediction becomes cheaper, we do more prediction. Effective policy will reduce regulatory barriers that unnecessarily hinder the increased use of prediction while at the same time maintaining or strengthening regulations necessary to protect people and facilitate efficient markets. We need to invest in training scientists to take advantage of these opportunities. For example, the University of Toronto’s Vector Institute aims to train hundreds of AI scientists every year, as does the Montreal Institute for Learning Algorithms (Université de Montréal and McGill University) and the Alberta Machine Intelligence Institute (University of Alberta), among other Canadian institutions.

Second, when the price of something falls, we use less of certain other things. When coffee becomes cheaper, we buy less tea. When machine prediction becomes cheaper, we do less human prediction. This is where AI policy meets social policy. While AI will enhance the productivity of the economy overall, increasing incomes on average, it will not benefit everyone equally. We must begin work now on debating policy options designed to help those who will lose their jobs. A compelling AI policy for Canada will include a strong social safety net. It will also include flexible training programs that are able to adapt to teaching the new skill sets that will become important as intelligent machines take on increasingly significant parts of the economy.


Third, when the price of something falls, we use more of certain other things. When coffee becomes cheaper, we buy more cream and sugar. What are the “cream and sugar” for machine prediction? For policy, one that’s particularly important is data. AI policy must facilitate access to data for Canadian scientists and corporations. Accordingly, privacy policy needs to be strict enough that consumers are comfortable providing their data to companies, but not so strict that the data cannot be used to effectively train and operate AIs. A well-designed policy that strikes the right balance could give Canada a meaningful advantage.

The majority of media attention to date has focused on advances in AI technology. It’s time to turn our attention to advances in AI policy. Advances in AI technology mean that prediction becomes cheaper. Advances in AI policy mean that we begin an inclusive, broad-based discussion among Canadians with the objective of deciding which side of the many trade-offs we want to take as a society, in the context of the rapidly falling cost of prediction. Overall, the goal should be to position ourselves to take advantage of cheaper prediction in a manner that ensures nobody is left behind.

This article is part of the Ethical and Social Dimensions of AI special feature.




Ajay Agrawal
Ajay Agrawal is a professor of strategic management and Geoffrey Taber Chair in Entrepreneurship and Innovation at the Rotman School of Management. He is founder of the Creative Destruction Lab, co-founder of Next 36, Next AI and Sanctuary, an AI/robotics company. Twitter @professor_ajay
Joshua Gans
Joshua Gans is Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship, and a professor of strategic management at Rotman. He is chief economist of the Creative Destruction Lab, department editor (strategy) at Management Science, and co-founder of Core Economic Research. Twitter @joshgans
Avi Goldfarb
Avi Goldfarb is Rotman Chair in AI and Healthcare and a professor of marketing. He is chief data scientist at the Creative Destruction Lab and holds other academic and research posts. Avi conducts research on privacy and the economics of technology. Twitter @avicgoldfarb

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.
