The year 2016 will be a memorable one for artificial intelligence (AI). First, there were important breakthroughs: companies like Google, Mercedes-Benz and Toyota are now building self-driving cars, and we now have software that can administer psychotherapy or write a press release. Deep-learning approaches, which use artificial neural networks loosely modelled on those in the human brain, have produced systems that can perform facial recognition as accurately as humans can, detect lung cancer in X-ray images and learn Chinese in two weeks, among other tasks.
Second, and perhaps more important, AI has moved into the commercial field, and the pace of new developments has accelerated. Together, these two trends have strengthened the conviction among people in the field that strong AI (systems with human-level or greater intelligence) is no longer just a distant possibility.
As an indication of the progress of AI in the last few years, consider the impressive evolution of the capacity of computer programs to beat humans at games. In the early 1960s, IBM researcher Arthur Samuel designed a program that learned to play checkers by playing against itself. By 1962, the program was able to beat a Connecticut checkers champion. Thirty-five years later, in 1997, IBM’s Deep Blue supercomputer defeated Garry Kasparov, the world chess champion at the time. In 2011, another IBM computer, called Watson, beat Jeopardy’s two best players. Jeopardy requires a good understanding of the English language and the ability to handle wordplay; it is much more difficult to program a computer to win at Jeopardy than to win at checkers or chess.
Now, in 2016, AlphaGo, a system developed by DeepMind (a British artificial intelligence company acquired by Google), beat Lee Sedol, one of the best Go players in the world. Go is played by placing black or white stones on a 19×19 board with the aim of occupying the most territory. The game’s complexity, which far exceeds that of chess, makes it impossible to win with a brute-force calculation approach. A victory at Go has long been considered a grand challenge in AI.
All of this suggests we are on the brink of fundamental and irreversible social change. The automation of transport could transform urban organization. Robots could replace many human jobs. While AI could help us make important scientific discoveries, it could also be exploited for military or other destructive purposes.
There is strong empirical evidence that these changes will be significant. Morgan Stanley estimates, for instance, that the automation of transport could save the US economy $1.3 trillion (8 percent of its annual GDP). This does not count the many nonmonetary benefits, such as those associated with an improved urban environment: more green spaces, fewer vehicles on the road (automated vehicles can be shared more easily across a pool of users), less pollution, reduced traffic congestion and safer streets, among others.
It is troubling, however, that little is being done to prepare for the challenges of these transformations. Here are four ways in which we could better prepare.
Increase awareness
While those in the AI business know of its potential short- or long-term impacts, this is not true for the general public, political decision-makers or business people. When we in the field discuss AI and robotics, we are often met with skepticism and incredulity; some people maintain that these technologies are the domain of futurologists.
It is true that thinking about the impact of technologies that are not yet widely adopted, or that do not yet exist, is an exercise in futurism. But that does not mean we should not think about those impacts; quite the contrary. As a first goal of proper preparedness, the population should know and understand the extent of the changes this technology could bring. Governments should increase their consultation with experts. The media should invest energy in disseminating more information about AI, despite the challenges involved in covering it. People should suspend their disbelief and give serious thought to the prospect that the future might look nothing like the past. One can hardly prepare for important changes if one dismisses the possibility that change will come.
Increase research in the humanities and the social sciences
While significant efforts are being made in the academic and business sectors to improve AI technology and create more functional robots, there is still a remarkable paucity of research on the political, economic and ethical dimensions of these technologies. What will be the economic impact of strong AI and the robot revolution? What kinds of risks does AI represent if it is used for military purposes? Should we worry about our increasing reliance on proprietary technologies and the rise of multinational technological corporations? What ethical issues are still under the radar?
Although AI and robotics are technical subjects, researchers in the social sciences and humanities should be thinking about their broad social implications, and they should be establishing research collaborations with colleagues working in the AI field.
One interesting initiative in this respect is the Future of Humanity Institute at the University of Oxford. This research centre brings together researchers from across the disciplines, including the humanities (its director, Nick Bostrom, is a philosopher by training). There should be more initiatives like this, in Canada and abroad. Early movers stand to gain great advantage from the knowledge such centres produce, in the form of increased international leadership, economic competitiveness and preparedness.
Prepare for a shrinking labour market
Recent technological developments are likely to bring about important changes in the labour market. If today’s machines can write press releases, administer therapy or make medical diagnoses, in the future we might expect them to accomplish tasks that are even more complex and displace not only low-skilled workers but also high-skilled workers.
In one sense, these changes might be positive. At a macro-economic level, more machines doing human work will mean reduced labour costs and more efficient economic production. But there may well be serious disruptions in the labour market or, to put it bluntly, many jobs may be lost.
How much labour market shrinkage can we expect? Will the lost jobs eventually be replaced? Some researchers have suggested that nearly half of all jobs in the US (47 percent) are at high risk of being automated. This estimate is very speculative, but the fact that we do not know the extent of the potential changes should not prevent us from being proactive.
How should Canada and other industrialized countries react in the face of this potential labour shrinkage? First, they should continue to support the development of a highly skilled and educated labour force. But this will take us only so far. It rests on the assumption that jobs in the traditional sectors of the economy (the production of raw materials, manufacturing, fishing, cab driving, and so on) will be replaced first, whereas it is not yet clear where machines will be more cost-effective than humans. For instance, working as a busperson is among the most difficult jobs for a robot, since it requires quick, fine motor skills, adaptation to a changing environment and the ability to respond to social cues.
Second, we should seriously consider the possibility that the lost jobs won’t be replaced. This may be bad news, especially given the current trend of rising economic inequality. Even though both high-skilled and low-skilled jobs may be replaced, people at the lower end of the economic scale are likely to be more severely affected and may find it more difficult to find new employment. As well, the prospect of fewer jobs in a more productive economy is likely to reduce the share of labour income in favour of capital: the overall flow of money paid for human work will decrease relative to the income flowing to capital, in the form of buildings, machinery, proprietary software, patents and computer systems. This will increase the wealth of already-wealthy capital owners.
If there is technological unemployment, if the lost jobs are not replaced and if economic inequality keeps rising, a much higher level of economic redistribution may be needed to avoid a return to the extreme inequality of past eras. Measures such as higher taxation of income and capital, or even a basic income, might become necessary.
Ensure AI is safe
One of the oldest preoccupations about the social impact of AI is the concern for human safety: the fear that AI technologies, once they reach a certain level of intelligence, may escape human control and somehow harm humans. The risks range from the malfunctioning of automated vehicles, such as drones or self-driving cars, through the creation of destructive military technologies, to the extinction of humanity itself.
But it is difficult to assess the future risks of AI and prepare for them adequately. How can one evaluate the risks of a technology that does not yet exist and whose capacities may be far superior to those of humans? Perhaps this explains why, despite the clear concern expressed by people in the field, few tangible proposals for ensuring AI safety have been put forward so far.
The threat that intelligent machines will harm us should not cloud our judgment, but it should not be overlooked either. While we think about the best ways to keep AI safe, we must not forget about the three goals mentioned above. And even if concrete AI safety measures have yet to be put on the table, we ought at least to make sure that we are able to monitor and regulate the development of these technologies.
Dries Buytaert, the creator of the open-source Web publishing system Drupal, recently suggested that the US should create an equivalent of the Food and Drug Administration for data and algorithms. The agency would oversee companies’ software algorithms and the way they use big data. If more governmental oversight of big data is needed, what about oversight of AI technologies? New governmental agencies could be created to keep public organizations up to date on what AI technologies can do and what they are being used for, and to establish rules and regulations.
We have outlined four goals for better preparedness: increase awareness, increase research in the humanities and the social sciences, prepare for a shrinking labour market, and ensure AI is safe. This list is by no means exhaustive, yet it suggests considerable efforts must be made. While various actors must contribute to the preparation effort, the federal and provincial governments, and their public sectors, have a central role to play. Public funding may be necessary to improve research on AI. Ottawa and the provinces may have to implement redistributive measures as well as employment policies.
Finally, the use of AI technologies must be adequately monitored and regulated. The development of AI and robotics may well represent positive change and enable improvement, but if we fail to prepare properly or do not prepare at all, it could result in significant drawbacks.