Artificial intelligence (AI) is becoming increasingly common in our everyday lives. The technology is used in a wide range of areas, including advertising, health care, banking and manufacturing. It is a massive advance in the tech sector that can benefit almost all levels of society.
But with the benefits come risks. Because of the speed at which the technology is advancing, it can be an unpredictable, or even malicious, tool that policy-makers are ill equipped to deal with.
This became apparent with the development of an AI system called GPT-3. GPT-3 was designed as a simple AI tasked with learning how to auto-complete a sentence, but it taught itself a suite of seemingly unrelated tasks, including how to write articles that are indistinguishable from writing done by a human. Its evolution raised alarm bells within the AI development community, which had concerns about its impact on public safety.
Our guest today is dedicated to filling the knowledge gap between AI developers and policy-makers. Jérémie Harris is a former physicist and Silicon Valley tech start-up founder who left the tech industry to collaborate with AI policy leaders around the world, including the former heads of AI policy at the U.S. Department of Defense, the World Economic Forum, and top AI labs like OpenAI and Anthropic. He has developed a plan to enable the federal government to monitor and create policy around AI so that Canada can stay ahead of the curve. Jérémie joins the podcast to discuss the current public safety risks posed by AI, and what the government can do to mitigate them.