
ChatGPT was hugely and instantly popular when it was released late last year, and people have readily adopted it in their workplaces, at school and at home. Whether it's producing computer code, emails, schedules, travel itineraries or creative writing, its uses seem endless.

But like other large language models that use artificial intelligence algorithms to produce text, image, audio and video content, such technology is not separate from our social and political realities. Researchers have already found that ChatGPT and other generative and traditional artificial intelligence (AI) systems can reinforce inequality by reproducing biases against, and stereotypes of, marginalized groups.

Well-founded concerns about AI's potential to perpetuate inequality have been voiced for years, and they grow ever more relevant as the technology becomes a larger part of our lives. Researchers and advocates have argued for AI policies that put fairness and accountability first.

AI has immense potential. It can improve our productivity, as well as our predictions and decisions, which in turn can help reduce disparities. We all have unconscious biases that influence our choices and actions, and it can be hard to know how we arrived at a decision and whether bias played a role. Because AI is programmed, and can be audited and changed, it can in theory help us be more accurate and fairer.

For example, researchers have explored how AI can help make the processing of refugee claims fairer or diagnose diseases more accurately. And a recent study shows that consultants using GPT-4, a later version of ChatGPT, outperformed consultants who did not. Those with weaker skills were particularly likely to benefit, potentially levelling the playing field for people without access to elite training.

But AI is built on existing data, such as images, texts and historical records, so our biases become built in. Its effects on marginalized groups often go unrecognized even as they are perpetuated, because the technology appears objective and neutral. Our report on AI research outlines what scholars have found about how AI can contribute to inequity and what can be done to mitigate it.

Data used in AI development plays a key role. ChatGPT has been trained on numerous text databases, including Wikipedia. A slew of recent articles and research has shown that the chatbot can – without intention on the part of programmers – reproduce sexist and racist stereotypes. For example, it associated boys with science and technology and girls with creativity and emotion. It suggested a “good scientist” is a white man, and readily produced racist content. In translations from Bengali and other languages with gender-neutral pronouns, ChatGPT changed these pronouns to gendered ones.  

These instances show how groups that are underrepresented, misrepresented or omitted from data will continue to be marginalized by AI trained on that material. Research shows that using more representative data can substantially reduce bias in outcomes. But in complex situations, particularly with generative AI, programmers may not be able to explain how specific outputs are reached, making those systems difficult to audit and fix.

Products using AI can also be designed and used in ways that further reinforce inequity. Familiar examples are Amazon’s Alexa and Apple’s Siri, which are named and gendered as women. Researchers have discussed how these AI-powered digital assistants appear to be innovative helpers, but at the same time embody gender stereotypes about women in the home. 

Profit motives may also lead companies to use AI to reproduce sexism and racism. Researcher Safiya Noble explored how Google searches for the term “Black girls” led to first-page results that sexually objectified Black girls and women because they were produced by an algorithm whose primary objective is to drive advertising. 

The consequences are grave. ChatGPT’s ability to perpetuate stereotypes may seem trivial, but repetition of biases and marginalization reinforces their existence. Unequal gender roles are created through repetition of gender norms and expectations. The same may be said of racial stereotypes. AI that is not built equitably can interfere with people’s ability to live safely and free of discrimination.  

In 2018, researchers Joy Buolamwini and Timnit Gebru demonstrated that facial recognition technology is less effective on darker skin tones because it was trained on a limited database of images. This can lead to misidentification and dangerous consequences for racialized people, as the New York Times revealed in its reporting on the wrongful arrests of Black men. The pervasiveness of AI, combined with a lack of understanding about how it works and how to make it fair, can obscure the extent of its harms, making it potentially more damaging to equity than having humans perform the same tasks.

Public policies at all levels of government can help shape how AI is created and used. They can require that impacts on (in)equity be assessed and that risks be reported before and after an AI-powered product or service is launched. They can also demand transparency about design, data and any disparities in that data. And they can prescribe that the public be informed when AI is used.  

Such policies should be developed with input from diverse communities and multidisciplinary experts, who have different knowledge of and perspectives on AI. This could ensure risks and effects would be considered at the outset rather than after harm is done – and would make developers accountable for their products.   


The European Union AI Act, which is still subject to approval, would be the first comprehensive AI law. It would require systems to be classified based on risk: some AI tools and programs would be banned for posing unacceptable risks, such as manipulating vulnerable groups, while others with lower risk levels, including generative AI such as ChatGPT, would have to make their data sources more transparent.

Similar discussions are taking place in Canada through the proposed Artificial Intelligence and Data Act, and in the United States through its Blueprint for an AI Bill of Rights.

Some have suggested that regulation may stifle innovation, and policies may not be able to keep pace with the speed at which AI is being developed. Industry standards will need to change to prioritize equity, safety and other social considerations. But innovation does not have to come at their expense: developing AI with the goal of reducing inequality is itself innovative.

A central question about AI is how to align it with our social norms and values. Building AI that prioritizes values such as fairness would help create more useful products that better serve everyone. Developers who design for groups that have historically been marginalized in AI, rather than addressing their interests retroactively, will be innovative while also contributing to a fairer society.

AI is used across every sector, and new technologies such as ChatGPT are becoming ever more integrated into our lives. At the same time, inequality is widening in many places. Public and organizational policies that emphasize equitable and safe AI are crucial for a more just world.



Carmina Ravanera
Carmina Ravanera is a senior research associate at the Institute for Gender and the Economy at the Rotman School of Management, University of Toronto, and co-author of the recent report An Equity Lens on Artificial Intelligence.
Sarah Kaplan
Sarah Kaplan is a distinguished professor and director of the Institute for Gender and the Economy at the Rotman School of Management, University of Toronto, and co-author of the recent report An Equity Lens on Artificial Intelligence.
