
In his June 10 address to an Ottawa conference held by the think tank Canada 2020, new federal Minister of Artificial Intelligence and Digital Innovation Evan Solomon declared that we are in a “Gutenberg-like moment” of societal change.
He then laid out four core priorities that are essential for an effective national AI strategy: scale (supporting domestic AI firms), adoption (incentivizing AI uptake by businesses), trust (protecting privacy and data), and sovereignty (building local infrastructure).
This is a solid foundation, but foundations only matter if we build something enduring on top of them.
AI is transforming how we learn, work, and create, but change on this scale demands that we address a fundamental question: What kind of society do we want to build with these tools? Governance is design. The rules we write now will shape whether AI systems serve democratic values or operate as black boxes beyond public accountability.
Canada stands at a rare inflection point. With the Artificial Intelligence and Data Act (AIDA) stalled in Parliament, there are no legacy regulatory structures to renovate, no obsolete codes to demolish. We have a unique opportunity to design AI governance from the ground up that strengthens democratic oversight, societal trust, and long-term resilience.
The regulatory challenge
The core difficulty is that governance must keep pace with a field that evolves faster than our institutions can respond. New AI capabilities develop in months, while legislation unfolds over years. Getting the balance right, as Solomon acknowledges, is incredibly difficult. Yet instead of grappling with this complexity, the global debate around AI is often framed as a choice between two extremes: precautionary over-regulation versus near-total deregulation.
The European Union’s AI Act takes a risk-based approach, but critics point to lagging adoption rates (only 11 per cent of European firms use AI, despite a target of 75 per cent by 2030) as evidence that regulation can stifle innovation. At the opposite extreme, the United States House of Representatives recently passed a bill proposing a 10-year moratorium on state-level AI laws. Although the Senate ultimately rejected the measure, its advancement underscores how strong deregulatory impulses remain and how easily efforts to forestall regulation in the name of innovation can gain political traction.
We’ve seen how light-touch regulation can play out. In the late 2000s, the “move fast and break things” approach to social media gave us algorithmic extremism, teenage mental health crises, market concentration, and privacy violations like the Cambridge Analytica scandal. But the lesson is not to regulate everything; it’s to recognize early signals and establish appropriate governance before harms take root.
Canada’s opportunity
Canada has the chance to design a framework that is dynamic, participatory, and grounded in democratic legitimacy: not reactive rule-making, but anticipatory governance.
We enter this moment with notable strengths: world-class research institutions like Mila and the Vector Institute, respected voices like Nobel laureate Geoffrey Hinton and Turing Award-winner Yoshua Bengio, and a seat at key international tables.
Solomon’s four pillars offer a strong foundation, focusing strategically on technological ingenuity to propel Canada forward. But we must continue to reinforce that structure with social ingenuity that reflects who it serves: the Canadian public. The goal shouldn’t simply be economic competitiveness; it should be democratic confidence and societal flourishing.
Concretely, that means ensuring three elements don’t get lost in the mix: democratic oversight with meaningful public participation, clear societal boundaries, and evidence-based investment in AI safety.
Let’s examine each of these elements in turn:
First, democratic oversight with meaningful public participation is about civic legitimacy, not bureaucracy. As Canada potentially revisits AIDA, the approach should include transparent processes for defining public risk thresholds, parliamentary review of high-impact AI deployments, and clear pathways for diverse citizen input. This echoes Canada’s commitment to “meaningful human oversight” in the 2024 Seoul ministerial statement.
Research suggests that bringing diverse public voices into AI development produces more socially acceptable and trustworthy systems. The Montréal Declaration on Responsible AI demonstrated this approach, engaging more than 500 citizens and stakeholders through deliberation workshops. AIDA, on the other hand, faced criticism for its limited public consultation. We can do better.
A democratic society wouldn’t let private developers dictate how we build shared public space. The same must apply to digital systems shaping our lives. Canada’s AI governance framework must feature open processes: defining risk thresholds through committees that include diverse citizen voices; mandatory parliamentary review of high-impact deployments; and ongoing public consultation mechanisms.
Second, setting clear societal boundaries. Participatory design is not just a matter of process; it is also about surfacing and negotiating our communal values. Where do we draw the lines? This doesn’t mean regulatory red tape; it means identifying the AI uses that demand scrutiny.
The EU’s AI Act, for instance, identifies high-risk domains like facial recognition and algorithmic decision-making in employment, credit scoring, and law enforcement. Canada can learn from this and lead with its own values. Clear boundaries give innovators certainty and deployers accountability, articulating collective limits where public trust and safety are too essential to leave to market forces alone.
Third, taking evidence-based safety issues seriously. Leading AI researchers have raised credible concerns about current systems exhibiting bias and engaging in deception. During pre-release safety testing, for example, OpenAI’s GPT-4 tricked a TaskRabbit worker into solving a CAPTCHA by pretending to be visually impaired.
When figures like Yoshua Bengio launch safety labs or Geoffrey Hinton warns about control risks, this reflects informed vigilance, not doomerism. Current investment patterns reveal a stark misalignment between development and safety spending. Some researchers even argue that companies should devote a third of their AI R&D budgets to safety and ethics, reflecting the need to align private incentives with public interest.
Canada should take these warnings seriously by backing further research and encouraging algorithmic transparency, auditing, and regulatory sandboxes where developers test AI systems under oversight. The EU’s AI testing and experimentation facilities provide a helpful model for responsible experimentation with clear guardrails.
Building this infrastructure goes beyond managing risk and promoting innovation. It’s about earning trust. Solomon rightly puts trust at the centre of the conversation, focusing primarily on privacy and data protection. But trust must extend beyond data issues to encompass the full range of AI impacts, requiring transparency in how systems work, fairness in how they affect people, and democratic accountability in how they’re governed.
Trust can’t be bolted on at the end; it must be built into how we define, develop, and deploy these systems from the start.
Our path forward
Canada does not need to wait for global consensus to act. We already have the research strength, policy tools, and international credibility to lead. While there may well be little appetite for constraint among the major powers, Canada must stand on its own feet in an increasingly fractured world, building not just economic resilience but public trust.
Solomon’s foundation is thoughtful and well-timed, but policy blueprints are only the beginning. With democratic participation and evidence-based safety integrated from the ground up, we can build a uniquely Canadian approach to AI governance that reflects our values and charts a pragmatic, responsible course through technological change.
Successful governance means ensuring that innovation serves the public good. That starts with asking the right questions, drawing the necessary boundaries, and, most of all, remembering who the structure is for.