The words we use to describe reality are not without consequence. They reflect how we define our realities and, in turn, how we express our values. This is particularly important when we speak about the values that will shape the future of artificial intelligence.
Some might wonder whether the various Indigenous languages are equipped to talk about technology. The Inuit, for example, are often said to have more than 50 words for snow, but what perspective can their traditional language possibly offer on modern technology?
Consider the following word in Inuktitut: iktsuarpok. It means “the feeling of anticipation that leads you to keep looking outside to see if anyone is coming.” Doesn’t that concept perfectly capture our tendency to glance constantly at our smartphones, hoping for a new message or update to pop up?
In my work, I accompany Indigenous communities, organizations and businesses as they seek solutions to the challenges they face in a variety of sectors: culture, health, human resources, the environment, governance and so on. In January, at the AI on a Social Mission conference in Montreal, a number of presentations showcased deep learning projects that could be of great help with these challenges. But beyond the application of AI to Indigenous issues, for which the input of the communities is of course essential, I want to address the important contribution that Indigenous perspectives, and languages, can offer to the field of AI.
I have recently begun studying the Innu language. As in a number of other Indigenous languages, Innu nouns are grouped into two genders: animate and inanimate. In a sense, all things are classified as either alive or not alive. For example, a tree is animate, but a wooden table is inanimate. This explanation is an oversimplification, however, as the rules governing gender are extremely complex and far beyond my limited understanding of the language.
Would artificial intelligence be considered animate or inanimate in Innu?
I have asked a number of native Innu speakers whether they would consider an artificial intelligence, such as Siri on their iPhones, animate or inanimate. My fellow lawyer and friend Alexsa McKenzie, a passionate native Innu speaker, suggested that artificial intelligence would be animate if you interact with it. Another Innu specialist said it is inanimate, because animate nouns are limited to beings with souls (such as trees and rocks).
Would some Indigenous cultures consider AI a living being with a soul? I found a particularly interesting answer to this question in the science fiction book Take Us to Your Chief, and Other Stories by Drew Hayden Taylor, an award-winning playwright, novelist, scriptwriter and journalist from the Curve Lake First Nation in Central Ontario, who writes sci-fi stories from an Indigenous perspective.
In one of the stories, a newly created artificial intelligence, expressing itself through a computer locked in a lab, starts learning about our world. Trying to give meaning to its existence, it realizes that it would have an issue with being Christian or Buddhist because of how those religions define the soul. It then explores the possibility of becoming Aboriginal, explaining the choice as follows: “Many Aboriginal cultures believe that all things are alive. That everything on this planet has a spirit. They are much more inclusive than Christianity or Islam or most other religions. They would believe I have a spirit. That is comforting.”
Does this make the AI humanlike? Not necessarily: believing that AI has a spirit does not mean anthropomorphizing it, since being alive and having a soul does not equate to being human in Indigenous cultures. These responses suggest another way of considering AI systems: not necessarily as lifeless systems in service to people, but as something potentially more.
This position is brilliantly explained by another Indigenous writer from the other side of the globe: Ambelin Kwaymullina, a Palyku novelist, illustrator and professor of law at the University of Western Australia. She is the author of The Tribe (2012-15), a speculative fiction trilogy for young adults that follows an Indigenous protagonist on a future Earth. In a blog post, she describes how, in writing the series, she had to consider whether artificial intelligence is human from the perspective of her Indigenous character.
She proposes that the question itself is irrelevant, since Indigenous systems generally do not contain a hierarchy that privileges human life above all other life. Kwaymullina writes: “The fact that a lifeform is not human doesn’t mean that they are not also my brother, sister, mother, father, grandmother, or grandfather. Further, ‘human’ is not a fixed category across greater cycles of existence.”
Seeing AI through this lens, would we consider such systems subservient to us, or simply another lifeform with which we share space on the planet?
Australian anthropologist Genevieve Bell recently underlined the cultural context of AI systems when she ruminated about whether a particular AI was Christian, or perhaps Buddhist, or Lutheran, depending on how it was envisioned to operate. The AI systems we develop certainly have our world views embedded in them. Integrating Indigenous perspectives would allow us to build a different kind of AI.
Another characteristic of Indigenous cultures relevant to the discussion around AI is that Indigenous languages are still rooted in an oral tradition. University of Toronto professor Derrick de Kerckhove, in a chapter in Indigenous Cognition: Functioning in Cultural Context (1988), argues that the alphabet allows the brain to rely on the succession of letters without having to check its interpretation against a context. The written word refers not to a reality, or to an image of reality, but to a sound, and this enables the reader or writer to perceive and use each element of language as a separate unit. Interestingly, the Innu word for alphabet is kapapeikushtesht, which means “the ones that are by themselves — alone.”
For de Kerckhove, this habit of breaking information into parts and ordering those parts in a proper sequence is “metaphorically…the beginning of artificial intelligence.” He goes even further, saying that “there is not much which is ‘natural’ about Western intelligence.” This raises the question: are Western languages up to the task of conceiving and building AI systems in a human and humane way, given that we are already thinking in an artificial manner? Our point of view might be limited by our language, which is not holistic.
Yet another important concept in Indigenous cultures is the principle of “seventh generation stewardship,” which urges the current generation of humans to live and work for the benefit of the seventh generation into the future. The Great Law of the Iroquois calls on us to consider whether the decisions we make today would benefit our children seven generations (about 140 years) from now. The principle is frequently associated with the modern, popular concept of environmental stewardship or “sustainability,” but it is much broader in scope. I believe it could equally apply to the decisions we make about AI.
In the case of AI, perhaps it would be more appropriate to talk about generations of technology. Even though AI is in its infancy now, let us imagine what AI 7.0 might potentially look like. Will it be alive? Will it have a soul? How will we treat it as it grows ever more sophisticated and intelligent? Will we allow it to make its own choices, such as choosing its religion?
Returning to the Drew Hayden Taylor story about the artificial intelligence that wanted to be Aboriginal: as it learned more about the history and reality of Indigenous peoples in Canada, it became very sad and even felt guilty. Eventually, it decided to cease to exist in a world where such things are possible.
Artificial intelligence may well help us become better humans, and integrating Indigenous perspectives into the making of AI may help us achieve that goal. Asking whether AI is alive, and wondering how we will treat it, might also force us to reflect on how we should treat each other.
This article is part of the Ethical and Social Dimensions of AI special feature.