Artificial Intelligence is a hot topic, and understanding how it works is essential to developing policies that maximize its benefits while mitigating its risks.
When COVID-19 upended plans for secondary school exit exams in the U.K. this year, a replacement system was introduced that created an uproar. Teachers assigned grades based on past test scores, which were then adjusted and confirmed by a computer algorithm. However, the AI-enabled system relied on historical data in a way that inflated results for students from wealthier neighbourhoods and reduced scores for those from less advantaged ones. Teachers, families and ethics experts alike criticized the outcome, citing biased data and a lack of ethical safeguards. This is an example of the risks that can arise when governments deploy AI-enabled systems without careful consideration and regulatory frameworks.
Concerns have already been raised about the impact of racially biased data and facial recognition applications used by police forces, the security and privacy of vital personal health data collection and analysis, and government regulation falling behind as tech firms continue to operate without oversight.
Issues of privacy came to light in October of this year, when it was revealed that Cadillac-Fairview had been collecting data and using facial recognition software in its 12 malls across Canada in 2018 without shoppers’ consent. The company has since stopped the practice, after concerns were raised by privacy commissioners at both the federal and provincial levels.
The application of AI technology by all actors, not just governments, has seen exponential growth, thanks to more efficient and accessible ways to collect and analyze massive datasets. The increasing adoption of AI applications has led to greater calls for regulation. Policy-makers should understand both the promises and pitfalls of AI, including its many ethical considerations.
We’re not talking about science fiction dystopias with robots taking over the planet. This is about getting up to speed with what’s actually happening now and how it will impact our future.
In 2017, the Government of Canada appointed CIFAR, a Canadian-based global research organization, to develop and lead the world’s first national artificial intelligence strategy. In 2018, CIFAR partnered with the Brookfield Institute for Innovation + Entrepreneurship, anticipating the need for tools that would help policy-makers better understand AI. Together, we developed the AI Futures Policy Labs, a series of workshops explaining what AI is (and isn’t), and where AI research is headed.
Through the first series of labs, which took place between 2018 and 2019, policy experts from government, academia, civil society and industry discussed the challenges, implications and latest developments in AI policy.
This year, with the challenges of the global pandemic, CIFAR accelerated the global launch of the AI Futures Policy Labs Toolkit, a set of online materials for a self-facilitated workshop. Available free of charge in both French and English, it includes videos, worksheets, case studies and user guides. The goal is to equip policy innovators and AI researchers worldwide with the resources to explore the intersections between policy and AI within their own communities.
Tools for self-facilitated team learning
Central to the toolkit is a four-part video series featuring some of Canada’s leading AI and policy experts. The videos and learning modules introduce core topics to get workshop participants up to speed on the fundamental questions facing AI researchers and policy-makers today — questions like “What exactly is artificial intelligence?” and “What actions are being considered to respond to the new technology?”
Using case studies of emerging AI applications, workshop facilitators guide participants to help them develop their own options for public policy interventions. We engage participants from across sectors and disciplines to bring diversity to the conversations and develop holistic approaches to policy-making.
Because many teams lack access to professional facilitators, we designed the toolkit with a facilitator guide for novices, so that anyone can host a workshop with their colleagues. Alternatively, the modules can be worked through alone. The process encourages learners to look at the big picture as they develop policy options, brainstorming with real-life examples and case studies.
Policy innovation in an age of advanced technologies
The AI toolkit is just one example of what can happen when you bring together experts from different areas of policy to develop a solution for a shared challenge. Developing creative ways for policy-makers to keep up with novel technologies and their broader societal implications is core to CIFAR’s policy work.
The toolkit equips key players with the skills, wider context, and networks they need to move effective policy forward even as new technologies emerge. We hope that this work can spark the necessary conversations to advance the deployment and responsible regulation of AI technologies for all of humanity’s benefit.