{"id":269428,"date":"2021-05-12T10:31:39","date_gmt":"2021-05-12T14:31:39","guid":{"rendered":"https:\/\/policyoptions.irpp.org\/issues\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/"},"modified":"2025-10-07T23:31:33","modified_gmt":"2025-10-08T03:31:33","slug":"its-time-for-a-public-safety-conversation-about-artificial-intelligence","status":"publish","type":"issues","link":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/","title":{"rendered":"It\u2019s time for a public-safety conversation about artificial intelligence"},"content":{"rendered":"<p>A manager hires a new employee, and offers to pay her $1,000 a day. She replies: \u201cI\u2019ll do you one better. Why don\u2019t you pay me one penny on my first day, and double my pay every day from there until the month is over?\u201d Sensing a bargain, the manager agrees. By the end of the month, he\u2019s out over $10 million.<\/p>\n<p>Such is the price of failing to respect exponential growth.<\/p>\n<p>The exponential progress of technology has a history of catching policy-makers off-guard. Normally, this isn\u2019t a huge deal \u2013 policy has always lagged behind technology to some degree. But when that technology creates serious public safety vulnerabilities \u2013 as with nuclear and biotechnology \u2013 gaps in policy cannot be ignored.<\/p>\n<p>That\u2019s why we ought to pay much more attention to progress in artificial intelligence (AI). Recent breakthroughs in deep learning \u2013 a kind of AI inspired by the structure of the brain \u2013 have placed the capabilities of AI on one of the sharpest exponential trajectories in the history of technology. The implications for policy-makers are as stark as they are underappreciated.<\/p>\n<p>Deep-learning systems are made up of components called parameters, which are modelled after the synapses that connect neurons in our brains. 
The larger a deep-learning system is \u2013 the more synapses or parameters it contains \u2013 the more processing power and data are needed to train it to perform a given task. But processing power is expensive, so only large firms such as Microsoft and Google have experimented with the largest of these AI systems.<\/p>\n<p>Today, most deep-learning systems can only perform narrow tasks, like tagging faces in an image or guessing which advertisements a user is likely to click on. But the holy grail of machine learning is to find a way to go further and build AI systems that can solve general problems and reason more like humans.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2001.08361.pdf\">Recent work<\/a> by various AI labs suggests that simply building larger AI systems may result in a leap toward that milestone. If this pans out, AI may be poised to become one of the most pressing risks to government and society over the coming decades \u2013 or even years.<\/p>\n<p><strong>An AI inflection point<\/strong><\/p>\n<p>In 2020, OpenAI \u2013 a world-leading, Microsoft-backed AI lab \u2013 announced that it had scaled up a very large AI system using unprecedented amounts of data and computing power. The AI that resulted from this experiment was called <a href=\"https:\/\/arxiv.org\/abs\/2005.14165\">GPT-3<\/a>.<\/p>\n<p>GPT-3\u2019s generalization abilities go well beyond anything previously seen. 
While it was trained merely to carry out a simple text autocomplete task (like the one performed by our phones when we write messages), it turns out to be capable of <a href=\"https:\/\/twitter.com\/IntuitMachine\/status\/1289862242100338690\">language translation<\/a>, <a href=\"https:\/\/twitter.com\/NimaRoohiS\/status\/1283981601697869824\">coding<\/a>, <a href=\"https:\/\/twitter.com\/zebulgar\/status\/1283927560435326976\">essay writing<\/a>, <a href=\"https:\/\/twitter.com\/nottombrown\/status\/1266188692311339008\">question answering<\/a> and many other tasks \u2013 without having to be adapted for them. Before GPT-3, each of these capabilities would have required a separate, task-specific AI system, which would have taken months and great expense to develop.<\/p>\n<p>Systems like GPT-3 represent a fork in the road \u2013 the beginning of an era of powerful, general-purpose AI in which individual AI systems with a wide range of capabilities could transform entire sectors of the economy.<\/p>\n<p>Industry has taken note, and a feverish race to build larger AI systems is now underway: AI systems are now being scaled <a href=\"https:\/\/openai.com\/blog\/ai-and-compute\/\">by a factor of 10 every year<\/a> (figure 1), and Chinese tech giant Huawei has <a href=\"https:\/\/venturebeat.com\/2021\/04\/29\/huawei-trained-the-chinese-language-equivalent-of-gpt-3\/\">recently developed<\/a> its own even larger version of GPT-3.<\/p>\n<p><strong>New AI is riskier than old AI<\/strong><\/p>\n<p>Like nuclear technology and biotech, AI is a dual-use technology: it lends itself equally to beneficial and malicious applications. While systems like GPT-3 can solve valuable problems like automating customer support services or summarizing reports, they can also be used by malicious actors to execute highly customized phishing attacks or undermine public discourse by using context-aware text-generating bots. 
Similarly, a sufficiently general AI designed for gaming or robotics applications might be repurposed to <a href=\"https:\/\/warontherocks.com\/2019\/09\/terrorist-groups-artificial-intelligence-and-killer-drones\/\">automate the flight of a weaponized drone<\/a>.<\/p>\n<p>As AI capabilities increase exponentially, so will the destructive footprint of malicious actors who leverage them. AI policy experts at Oxford, Cambridge, Stanford and OpenAI <a href=\"https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1802\/1802.07228.pdf\">have warned<\/a> about the prospect of an unprecedented leap in the range of malicious applications of AI systems.<\/p>\n<p>Just as crucially, there is growing evidence that advanced AI systems are <a href=\"https:\/\/techcrunch.com\/2020\/08\/07\/here-are-a-few-ways-gpt-3-can-go-wrong\/?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAALg3bfyRnRHSP02JUD4l7fxN7hMpRCa22chF6jyD2QLs8IzAqcfpOvLpEQh5kC2jp1DgSuyJUOT6Vpz-_R6_OJp4AyWqr4a17gjqMqPowM3WlP7HtD7j7TP9fFJofLqFreD9Z4GiWToVYqRdkkqVRLPbunoUQaPQcLUkZauryfbh\">less predictable<\/a> and <a href=\"https:\/\/openai.com\/blog\/faulty-reward-functions\/\">harder to control<\/a> than expected. There is currently no regulation on the development of AI to ensure that it is safe. This will need to change: as our infrastructure, economy and military become more dependent on AI, the likelihood and impact of accidents will increase.<\/p>\n<p>Canada is a laggard in addressing data, AI, and on-the-horizon technology threats. Our AI policy is designed to respond to narrow problems rather than anticipate general ones, and treats AI as an economic, procurement or defence issue rather than a public-safety challenge. 
In contrast, the European Commission has proposed a <a href=\"https:\/\/ec.europa.eu\/commission\/presscorner\/detail\/en\/IP_21_1682\">rules-based approach<\/a> to guarantee the \u201csafety and fundamental rights of people and business.\u201d While even the European strategy is not designed with general-purpose AI in mind, it acknowledges the important public-safety challenge AI poses.<\/p>\n<p>Canada\u2019s wait-and-see approach fails to address the flexibility and rapid pace of development of increasingly general-purpose AI \u2013 a domain in which a single breakthrough can create serious vulnerabilities across multiple sectors in short periods of time.<\/p>\n<p><strong>AI tracking is urgently needed<\/strong><\/p>\n<p>As AI systems become exponentially more capable, and the cost of training and deploying them plummets thanks to various <a href=\"https:\/\/openai.com\/blog\/ai-and-efficiency\/\">efficiency improvements<\/a>, increasingly powerful systems will proliferate, making rapid policy responses necessary. Know-your-customer requirements may have to be imposed on cloud service providers whose computational resources are used to train large AIs. Governments may need to track the sale of computing resources. While the time for these measures has not yet come, there are simple but crucial pieces that we must put in place today.<\/p>\n<p>Canada urgently needs a mechanism to track AI development through a public-safety lens \u2013 one with the capacity to monitor advances in AI, inform stakeholders, and make recommendations. 
A single team of technical and policy experts could track the progress being made at the handful of AI labs that are currently leading the push toward flexible and general AI, and build relationships with them to facilitate forecasting and analysis.<\/p>\n<p>The status quo means falling behind \u2013 or worse, flying blind \u2013 on what may become the defining policy issue of the century.<\/p>\n<p>The investment required to hedge against this risk is small, but our window to act won\u2019t be open forever: AI is evolving fast, and playing catch-up isn\u2019t an option. When that window closes, we may find ourselves wandering in the dark through one of the most significant tests of technology governance Canada has ever faced.<\/p>\n<p><em>Readers with questions about the intersection between public safety and the emerging AI landscape can reach Jeremie at jeremie@sharpestminds.com.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A manager hires a new employee, and offers to pay her $1,000 a day. She replies: \u201cI\u2019ll do you one better. Why don\u2019t you pay me one penny on my first day, and double my pay every day from there until the month is over?\u201d Sensing a bargain, the manager agrees. 
By the end of [&hellip;]<\/p>\n","protected":false},"featured_media":279277,"template":"","meta":{"_acf_changed":false,"content-type":"","ep_exclude_from_search":false,"apple_news_api_created_at":"2025-10-08T03:31:36Z","apple_news_api_id":"89574c28-9917-45cc-b0b7-7ce5f247e616","apple_news_api_modified_at":"2025-10-08T03:31:36Z","apple_news_api_revision":"AAAAAAAAAAD\/\/\/\/\/\/\/\/\/\/w==","apple_news_api_share_url":"https:\/\/apple.news\/AiVdMKJkXRcywt3zl8kfmFg","apple_news_cover_media_provider":"image","apple_news_coverimage":0,"apple_news_coverimage_caption":"","apple_news_cover_video_id":0,"apple_news_cover_video_url":"","apple_news_cover_embedwebvideo_url":"","apple_news_is_hidden":"","apple_news_is_paid":"","apple_news_is_preview":"","apple_news_is_sponsored":"","apple_news_maturity_rating":"","apple_news_metadata":"\"\"","apple_news_pullquote":"","apple_news_pullquote_position":"","apple_news_slug":"","apple_news_sections":[],"apple_news_suppress_video_url":false,"apple_news_use_image_component":false},"categories":[9387,9372,9383,9403],"tags":[8547,9237,8569],"article-status":[],"irpp-category":[4374,4337,4385],"section":[],"irpp-tag":[7102],"class_list":["post-269428","issues","type-issues","status-publish","has-post-thumbnail","hentry","category-elaboration-de-politiques","category-recent-stories-fr","category-sciences-et-technologies","category-securite-nationale","tag-cybersecurite-fr","tag-intelligence-artificielle","tag-securite-nationale","irpp-category-innovation","irpp-category-science-et-technologie","irpp-category-securite-nationale","irpp-tag-cybersecurite"],"acf":[],"apple_news_notices":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>It\u2019s time for a public-safety conversation about artificial intelligence<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link 
rel=\"canonical\" href=\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"It\u2019s time for a public-safety conversation about artificial intelligence\" \/>\n<meta property=\"og:description\" content=\"A manager hires a new employee, and offers to pay her $1,000 a day. She replies: \u201cI\u2019ll do you one better. Why don\u2019t you pay me one penny on my first day, and double my pay every day from there until the month is over?\u201d Sensing a bargain, the manager agrees. By the end of [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/\" \/>\n<meta property=\"og:site_name\" content=\"Policy Options\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/IRPP.org\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-08T03:31:33+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/policyoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@irpp\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/\",\"url\":\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/\",\"name\":\"It\u2019s time for a public-safety conversation about artificial intelligence\",\"isPartOf\":{\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/policyoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg\",\"datePublished\":\"2021-05-12T14:31:39+00:00\",\"dateModified\":\"2025-10-08T03:31:33+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#primaryimage\",\"url\":\"https:\/\/policyoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg\",\"contentUrl\":\"https:\/\/polic
yoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg\",\"width\":1920,\"height\":1080},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/policyoptions.irpp.org\/fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"It\u2019s time for a public-safety conversation about artificial intelligence\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/policyoptions.irpp.org\/fr\/#website\",\"url\":\"https:\/\/policyoptions.irpp.org\/fr\/\",\"name\":\"Policy Options\",\"description\":\"Institute for Research on Public Policy\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/policyoptions.irpp.org\/fr\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"It\u2019s time for a public-safety conversation about artificial intelligence","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/","og_locale":"fr_FR","og_type":"article","og_title":"It\u2019s time for a public-safety conversation about artificial intelligence","og_description":"A manager hires a new employee, and offers to pay her $1,000 a day. She replies: \u201cI\u2019ll do you one better. 
Why don\u2019t you pay me one penny on my first day, and double my pay every day from there until the month is over?\u201d Sensing a bargain, the manager agrees. By the end of [&hellip;]","og_url":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/","og_site_name":"Policy Options","article_publisher":"https:\/\/www.facebook.com\/IRPP.org","article_modified_time":"2025-10-08T03:31:33+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/policyoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_site":"@irpp","twitter_misc":{"Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/","url":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/","name":"It\u2019s time for a public-safety conversation about artificial 
intelligence","isPartOf":{"@id":"https:\/\/policyoptions.irpp.org\/fr\/#website"},"primaryImageOfPage":{"@id":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#primaryimage"},"image":{"@id":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#primaryimage"},"thumbnailUrl":"https:\/\/policyoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg","datePublished":"2021-05-12T14:31:39+00:00","dateModified":"2025-10-08T03:31:33+00:00","breadcrumb":{"@id":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#primaryimage","url":"https:\/\/policyoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg","contentUrl":"https:\/\/policyoptions.irpp.org\/wp-content\/uploads\/2025\/08\/Its-time-for-a-public-safety-conversation-about-artificial-intelligence.jpg","width":1920,"height":1080},{"@type":"BreadcrumbList","@id":"https:\/\/policyoptions.irpp.org\/fr\/2021\/05\/its-time-for-a-public-safety-conversation-about-artificial-intelligence\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/policyoptions.irpp.org\/fr\/"},{"@type":"ListItem","position":2,"name":"It\u2019s time for a public-safety conversation about artificial 
intelligence"}]},{"@type":"WebSite","@id":"https:\/\/policyoptions.irpp.org\/fr\/#website","url":"https:\/\/policyoptions.irpp.org\/fr\/","name":"Policy Options","description":"Institute for Research on Public Policy","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/policyoptions.irpp.org\/fr\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"}]}},"_links":{"self":[{"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/issues\/269428","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/issues"}],"about":[{"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/types\/issues"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/media\/279277"}],"wp:attachment":[{"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/media?parent=269428"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/categories?post=269428"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/tags?post=269428"},{"taxonomy":"article-status","embeddable":true,"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/article-status?post=269428"},{"taxonomy":"irpp-category","embeddable":true,"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/irpp-category?post=269428"},{"taxonomy":"section","embeddable":true,"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/section?post=269428"},{"taxonomy":"irpp-tag","embeddable":true,"href":"https:\/\/policyoptions.irpp.org\/fr\/wp-json\/wp\/v2\/irpp-tag?post=269428"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}