Artificial intelligence has become ingrained in our everyday routines and touches almost everything in our lives. It sorts data, generates content, drives social media and allows us to search for the answer to almost any question. For many, AI technology is just another tool, but for Métis it raises deeper questions about how we can control our digital identities and online experiences.

How is traditional knowledge being understood? Is it being misunderstood? What happens when it is distorted? And what, or who, gets left behind?

The Métis Nation is entering a digital transformation we didn’t design, but one we have been forced to adapt to and must now help shape. Métis communities have expressed concerns about AI and the way it references our history, traditions and culture.

The Métis National Council is working to protect all of these. It is a member of the Indigenous Peoples on Economic and Trade Cooperation Arrangement and recognizes the importance of safeguarding and protecting languages and cultures in the rapidly evolving AI landscape, as well as preserving Indigenous intellectual property and cultural expressions.


With the federal government making budget cuts, we know that there will be more reliance on AI to fill that gap. It is essential that the government and other public-policy institutions understand who Métis are, so that decisions affecting us are rooted in consultation with our communities and accurately represent Métis needs and priorities.

Most AI systems "learn" from data rooted in Western ways of thinking. That thinking is transactional and hierarchical, and it can erode cultural identity through cognitive imperialism: it is heavy on bias and lacks Indigenous perspectives. Scraping online data that perpetuates prejudices against non-Western peoples results in AI that distorts or ignores Indigenous knowledges.

Artificial intelligence thus gives users information that replicates stereotypes, erases nuance and presents one way of knowing as the default. Métis knowledge risks being used without consent, told without context or erased altogether.

Métis knowledge is not just information. It is sacred, often oral, and emphasizes relationships between people and with the natural world. It comes with responsibilities that cannot be respected by AI systems with no authentic method of interpretation.

Shani Gwin, a sixth-generation Métis and CEO of pipikwan pêhtâkwan, an Indigenous-owned public-relations company, shared my concerns in a recent conversation. “Storytelling should not come from AI,” said Gwin. “We still don’t know the impacts of sharing traditional knowledges with largely unregulated AI technologies.”

When AI presents Indigenous stories or sacred teachings without context or consent, it’s not just inaccurate, it’s a form of digital extraction with inherent limitations and risks.

Gwin’s team is building an AI tool called wâsikan kisewâtisiwin to help correct unconscious bias or racism in online material. It will serve to educate non-Indigenous writers and provide a way to correct inaccuracies about Indigenous Peoples. The tool is being designed to think relationally (matriarchal) rather than hierarchically (patriarchal) and to present traditional and Western information as equal. It is an example of how we can put perspectives of Indigenous Peoples at the heart of critical AI infrastructure.

Despite the risks, many Indigenous thinkers around the world are not rejecting AI, but working on innovative projects to reshape it with an ethical foundation. For example, Indigenous communities in the Amazon are using it to help protect their land through an AI “assistant” developed by a woman in her native village. In Ghana, a university has combined traditional wisdom with technology to develop weather-prediction tools in the fight against drought.

Closer to home, a project led by PolArctic in Sanikiluaq, Nunavut, has combined traditional Inuit knowledge of land and sea with satellite data, scientific research and AI-based sea-ice forecasting to identify new fishing locations. This has helped support food security, infrastructure planning and habitat protection on Indigenous terms. It is a powerful example of AI shaped by, not imposed on, Indigenous ways of knowing. 

Although AI offers some benefits, it also perpetuates stereotyping and causes measurable harm to the environment. Large data centres require rare earth minerals, huge amounts of energy and fresh water for cooling — almost 19 million litres (five million gallons) a day.

Put another way, every 100-word prompt sent to a generative AI model like ChatGPT uses roughly one bottle of water — and billions of prompts are made every day. Governance of artificial intelligence must address environmental implications. And it must be developed with input from Indigenous Peoples who remain close to the land.

Métis must be among those who provide that input and become partners in shaping its regulation, development and use to ensure it is applied ethically and in ways that do not harm Métis traditions. As Gwin says: “If we encourage AI to think about reconnection with humans, animals, nature and the land, it might not be as extractive.”

In Métis governance, all partners share responsibility for outcomes and hold each other in mutual respect. Such a model could offer a culturally grounded, ethical alternative to building AI on data extracted from existing sources without any concern for the biases and misrepresentations those sources might contain.

In practice, reciprocity means you don’t just take knowledge, you give something back. It means asking permission, working collectively, honouring consent and maintaining a relationship over time. Extractive practices put efficiency and output first. A reciprocal approach prioritizes balance, consent and ongoing connection. This kind of accountability should apply to every stage of AI development.

The Métis National Council has a responsibility to lead by example and remind everyone to be mindful of what information we share with AI tools. It is not only the history and traditions of Métis that are at risk of being altered, but those of all Indigenous Peoples.

The AI road ahead can’t be covered in a quick sprint. It’s about slow, community-led integration with safeguards and purpose. And if artificial intelligence is to serve Métis communities, then Métis must co-develop the rules.


Victoria Pruden

Victoria Pruden is president of the Métis National Council and a seventh-generation Métis leader committed to revitalizing the MNC through ethical governance, cultural advocacy and principled intergovernmental leadership.
