For nearly a decade, my research has focused on the ways that artificial intelligence (AI) perpetuates bias against Black communities and other racialized or minority groups.

The evidence is overwhelming about the harm to Black communities and people, in particular. Facial recognition systems frequently misidentify Black individuals, resulting in false arrests. At the same time, predictive policing algorithms – trained on historically biased crime data – reinforce the overpolicing of Black neighbourhoods.

From biased hiring software to racially skewed medical tools, AI's harm to Black communities is immediate and tangible. Yet time and again, Canadian policymakers have failed to heed this evidence, or simply failed to grasp it.

The evidence of harm is overwhelming

Consider Bill C-27, federal government legislation that sought to regulate AI in Canada. I watched every hearing of the House of Commons standing committee on industry and technology, hoping for a serious discussion of AI’s racial biases. It never happened.

I then wrote to the committee chair and members, urging them to engage with the issue, but to no avail. When the bill died following prorogation of the Commons for the April federal election, there was hope that the next attempt at AI legislation would do better for Black Canadians.

That hope briefly came back to life when Prime Minister Mark Carney appointed Canada’s first minister for artificial intelligence – a promising signal that Ottawa might be ready to take AI governance seriously.

But that optimism quickly faded when the new minister, Evan Solomon, made it clear he’s far more interested in AI’s economic benefits than in regulating its harms, saying Canada would stop “over-indexing on warnings and regulation.”

Then in September, Solomon unveiled Ottawa’s AI Strategy Task Force. Not one of the 27 original members was Black. This omission was so glaring that I sent an inquiry to his office. The response: “We have not received confirmation that a member of the Task Force self identifies as Black at this time.”

How could a national AI strategy in 2025 exclude the very community most adversely impacted by AI? While many groups are affected, sector-specific research consistently shows that Black people experience the most severe and widespread harm in each area examined.

A task force with a glaring omission

I decided to speak out. I assembled the portraits of all 27 task force members and shared a composite photo on a LinkedIn post, highlighting the glaring absence of a single Black face. The post garnered more than 10,000 impressions, almost 1,200 reactions and hundreds of comments because it resonated with many people who felt similarly outraged.

I also worked with fellow Black academics, professionals and allies to draft an open letter to Carney and Solomon. Sixty of us signed it, demanding genuine representation for Black Canadians in AI policymaking.

We sent the letter on Oct. 15. What happened next was telling. The government updated the task force’s online roster to add one Black member – a university student whose background does not reflect substantive expertise in AI.

The student’s inclusion is not the issue here. Rather, the timing and nature of the appointment suggested a reactive gesture rather than a meaningful effort to address representation. It risked reducing a serious concern to a symbolic response.


Regardless of the student’s experience or interest in the subject, there were better options. Black participation in national AI policy must be meaningful, qualified and transparent. There is a wealth of Black experts in Canada’s tech and AI ecosystem. None of them were appointed to this task force.

The decision to ignore an entire community of qualified voices is a profound failure of leadership. It suggests that Black perspectives were an afterthought – valued only as optics, not as a source of insight.

Why Black expertise matters in AI governance

Our community deserves stronger representation at the table. Who better to help develop guardrails for racial bias in AI than those who have already felt its sting?

The Black community understands viscerally what is at stake when algorithms decide how long you spend in jail, whether you get a job interview or a loan, or whether you suffer a false arrest. Our lived experiences and expertise would only strengthen (not weaken) Canada's AI strategy, making it more robust and more just for everyone.

Yet, the message from those in charge has been clear: they don’t really want us to participate in developing AI strategy.

That is why I decided to take a stand. As a Black scholar whose decade of research has identified the real harm AI poses to the Black community, and one who believes in the genuine participation of this community in addressing that harm, I could not in good conscience take any step, directly or indirectly, that would lend moral legitimacy to the current composition of Canada's AI task force.

Therefore, I refrained from making any submission during its consultation process, which ended Oct. 31.

When Black voices are meaningfully included, I and others in the Black community will be happy to contribute.


Gideon Christian

Gideon Christian is an associate professor and university research chair in AI and law at the University of Calgary. His research focuses on racial bias in AI technologies.
