Canada risks enacting an artificial intelligence (AI) policy that protects people from harm, but not from irrelevance.

That tension is already visible in debates over youth and technology. Delegates to the recent Liberal Party national convention backed a social media ban for Canadians under 16, while federal and provincial governments are moving on online safety. Protection may be necessary, but young people should not only be governed by digital policy. They should also have meaningful ways to help shape it.

More broadly, Ottawa is moving to secure Canada’s AI position through major public investment. Recent federal budgets have committed billions to computing capacity, infrastructure and adoption, while the spring economic update proposed a Canada Strong Fund to invest in strategic Canadian projects and companies.

These investments matter. But they also reflect a familiar pattern. When confronted with technological change, governments reach first for regulation, infrastructure and redistribution.

All are necessary. None is sufficient.

If Canada is to remain competitive in the AI era while maintaining social cohesion, it must move beyond managing risks and compensating losses. It must adopt a third approach – a contributive model of AI governance.

AI threatens not only employment, but also relevance. Entire categories of skills risk being devalued, leaving people formally included in the economy yet disconnected from meaningful participation.

Canada is not starting from scratch. Existing investments in AI infrastructure, research, skills and adoption already point in the right direction. The next step is to organize them around a clearer goal: ensuring Canadians can contribute meaningfully to an AI-driven economy.

That means building stronger talent pipelines, giving workers flexible transition support and creating more public interest AI pathways in fields such as health care, climate, education and public administration.

The limits of regulation and redistribution

Current AI policy is largely built on two assumptions.

The first is that technology must be regulated to minimize harm, whether in the form of misinformation, labour displacement or algorithmic bias. The second is that its economic consequences can be addressed by compensating those who lose out.

These approaches are essential. But they are reactive by design.

They treat AI as something to be contained or corrected, rather than something to be actively shaped through participation. In doing so, they overlook a deeper and more destabilizing force behind public anxiety.

The real fear is not just job loss or digital harm. It is the growing sense of becoming unnecessary.

That fear is not irrational. Recent estimates suggest that about 60 per cent of employees may be highly exposed to AI-related job transformation, even though this exposure does not automatically mean displacement. Statistics Canada has also noted that around half of those highly exposed workers are in jobs where AI is more likely to complement their work than replace it.

The risk of superfluity

A society in which large segments of the population feel superfluous – being present, but not needed – will struggle to sustain legitimacy, no matter how effective its redistributive mechanisms.

In The Origins of Totalitarianism, political theorist Hannah Arendt used superfluousness to describe a deep political pathology of modern mass society: people may be formally present while losing a meaningful place in common life.

AI-driven labour disruption is obviously different, but the warning is relevant. Redistribution can mitigate hardship, but it cannot replace the sense of being needed.

This is where current policy frameworks fall short.

The labour market is already shifting. From 2018 to 2024, job postings asking for AI skills in Canada nearly tripled, according to the Future Skills Centre and the Conference Board of Canada. Meanwhile, OECD research finds that AI is already changing the mix of skills that Canadian employers demand, especially in occupations with higher exposure.

Toward a contributive model

A contributive model does not reject the current federal approach. It extends it. Canada already invests in AI research, standards, computing capacity, adoption and skills. The question is whether those investments are helping Canadians participate in AI’s development and use, rather than simply adapt to decisions made elsewhere.

This requires a shift in emphasis: from access to technology toward capacity to shape it; from broad retraining toward practical pathways for different kinds of workers; and from public consultation toward sustained participation in implementation.

The goal is not to make every Canadian an AI specialist. It is to ensure that every student, worker, public servant, small business owner and community organization has meaningful ways to keep pace with AI and contribute to how it is used.

Three key approaches to adopt

The federal government should act in three areas:

First, it should broaden the AI talent pipeline. Canada has strong research capacity, but more of it should be connected to colleges, polytechnics, startups and applied learning. Initiatives such as Amii’s AI Workforce Readiness Program, CUCAI (Canada’s largest undergraduate AI conference) and the AI4Good Lab show how technical talent can be developed outside a narrow research track. The goal should be a wider base of Canadians able to build, adapt and apply AI.

Second, Ottawa should help workers and non-specialists keep pace. AI disruption will often arrive task by task, not occupation by occupation. A clerk, teacher, nurse, technician, public servant or small business employee may not need to become an AI engineer, but they will need practical ways to understand and use AI in their work. The Future Skills Centre, DIGITAL and community-based AI literacy efforts offer useful starting points but need to be scaled and better connected.

Third, it should expand public-interest AI participation. If AI is going to reshape health care, education, climate policy and public administration, Canadians should have structured ways to help shape those uses. Gen(Z)AI, Mila’s AI Policy Fellowship and the federal AI Strategy Task Force process offer useful models. But participation cannot stop at consultation papers or expert roundtables. It should also be linked to implementation through fellowships, civic AI labs and applied public-interest projects.

Competitiveness through participation

A contributive model is not only socially desirable. It is economically strategic.

Countries that succeed in the AI era will not be those that simply regulate effectively or redistribute efficiently. They will be those that mobilize the widest possible base of talent and participation.

That is especially important in Canada, where business adoption is rising but still uneven.


Statistics Canada recently reported that 12.2 per cent of businesses were using AI to produce goods or deliver services in the second quarter of 2025, up from 6.1 per cent in the second quarter of 2024.

The direction is clear: AI adoption is increasing, but it remains far from universal. That makes this the moment to broaden participation before the benefits and capacities of AI become concentrated too narrowly.

Canada’s competitive advantage has always rested on its ability to combine innovation with inclusion. But in the context of AI, inclusion must be redefined. It is no longer enough to ensure access to benefits. We must also ensure access to meaningful contribution.

The debate around AI often oscillates between optimism and fear, between promises of productivity and concerns about displacement. A contributive framework recognizes that disruption is real and must be managed, but insists that the legitimacy of technological change depends on whether people feel they have a place within it.

What Canada needs is an AI strategy that treats contribution, not just protection, as a public good.



Leo Yang

Leo Yang is a politics, philosophy and economics (PPE) student at Queen’s University and an advocate for youth civic education and sustainability. He served two years as an elected university senator and is an incoming master's student at the London School of Economics.