AI can be used for social good, but citizens need to be part of the dialogue around which values we want embedded into AI policies and systems.
Artificial intelligence (AI) has the potential to be human rights’ best advocate or their worst enemy. Either it can help us reach the UN’s Sustainable Development Goals (SDGs) or, to quote internationally renowned AI scientist Yoshua Bengio, it can “increase discrepancies between the rich and the poor and be a threat to democracy.” Fears of job loss and psychological manipulation are real and require a united front.
To help us better understand and shape the social impacts of AI for the greater good, the inaugural conference AI on a Social Mission was held last month in Montreal. “AI is both a hope and a danger,” warned Bengio, who was the keynote speaker at the conference.
Over the course of the two days, inspiring start-ups convened to share how they use AI to save lives, make information on mental health more accessible, support personalized education and more. Expert panels delved into issues that included open data governance, AI’s impact on legal frameworks and the accessibility and acceptability of these new technologies. Multidisciplinary discussion groups assessed the social impacts of AI, and participants offered concrete suggestions on how to implement AI in a way that reflects humanitarian values.
The world is facing a historic shift. The concentration of researchers and AI talent in Canada positions us strategically as a country with the potential to become a world leader in artificial intelligence. To achieve that, AI must be developed ethically and responsibly to ensure its equitable and accessible implementation for everyone. We need to get AI out of the academic milieu and reach out to the community, facilitate a collective co-creation of our future and forge a united front against the risks posed by AI.
Because my work combines AI and social work, I wanted to reach citizens through the community organizations and nonprofits representing them. The event was sold out, with approximately 200 participants, including 50 nonprofits as well as numerous AI, data science, ethics and social innovation researchers.
Here are some of the recommendations that emerged from the gathering.
The algorithms innate to the machine-learning and deep-learning programs used in various applications and platforms are very complex. As a result, there’s a lot of support for an independent public body to oversee AI implementation, whether it’s an ombudsman, watchdog, auditing bureau or some kind of financial market authority. Independent experts are needed to audit systems that might one day be responsible for affecting citizens’ access to, or denial of, certain public services or privileges. This independent body would hear citizens’ complaints, analyze recurring problems and make recommendations as we gradually adapt to the implementation and use of AI in our society.
Countries around the world are racing against each other and investing heavily in AI research and application. Will leaders use AI to support social, economic and cultural rights or, inadvertently, to fuel further inequalities? Questions we can’t even think of today will arise tomorrow. A flexible and transparent body is necessary to protect our values as a society.
Multidisciplinary discussion groups at the conference pushed strongly for an “Ethical AI” certification as a tool to ensure AI’s principled implementation. Many referred to an approach similar to the International Organization for Standardization’s (ISO) certification that could be applied to any application, tool or platform using AI. The certification process would have to be well funded and structured, and evaluations would have to be conducted quickly, given the AI race in economic and military circles. Some categories, in fact, could be fast-tracked based on public good and safety considerations.
The third loud and resounding consensus was that data must be accessible to, and usable by, the public and researchers. Suggestions were made about data being entrusted to a public body to ensure that it’s accessible and of good quality, in an effort to improve prosperity for all citizens. If data is the new pillar of democracy, how can citizens possibly harness the power of AI if they can’t use their own data?
Critically important recommendations were made on providing nonprofits with better data governance mechanisms. Nonprofits serving our most vulnerable citizens must be empowered with data analysis and data science tools. This funding could allow them to build so-called “data co-ops” in each sector and optimize their reach and social return on investment (SROI). A data co-op is a platform, growing rapidly in use, that allows data to flow among communities in the cooperative social economy. The data co-op not only serves these communities, it is also owned by them.
There is a pressing need for our legal frameworks to be reviewed and adapted to our rapidly changing needs. Whether in the context of research, policy-making or product development, innovating with a new technology raises unanticipated questions. Leaders are being called upon to ask themselves whether their clients or constituents agree on what constitutes informed consent or whether they feel violated when data are shared. Can governments impose automatic consent for data-sharing based on the argument that it will benefit public health, for example? Can we and how should we redistribute the prosperity AI is promising to generate?
These questions revolve around privacy, consent, and redistribution of wealth, and they are fundamental to the governing of our society. AI requires us to revisit our values as a society as we embed them into our policies, laws and regulations on AI. What are the values we want to protect in a diverse, multilingual and multicultural society – democracy, equality, diversity, safety, sustainability?
Privacy and human rights
Some of the other areas that must be regulated include: citizens’ control over their data; the right to privacy; protection against psychological manipulation; and the right to explanation and appeal when prescriptive and predictive algorithms are used and human rights are at stake.
Our laws and policies must safeguard our values, and so the use of fiscal and financial incentives was also recommended. Two of the major suggestions were incentives to collaborate across sectors using tax breaks, much like a research and development tax credit, and the taxation of profitable uses of data to fund public services and AI research and development that serve the public good.
Social Impact Index
On a similar note, participants at the conference supported the concept of a Social Impact Index (SII). An SII would assess the value a company contributes to society, as a guide for investors and public funders. Organizations would be supported in developing SROI measures and incorporating them into their return on investment reports. Governments could evaluate, through the use of algorithms, whether a company rates highly on an SII, and whether it should receive public funding.
The complexity of artificial intelligence is intimidating. Its potential impact can fuel distress and fear. To feel reassured, citizens need to understand how AI works and how it can benefit them. Civil society must be given the means not only to increase digital literacy, but also to allow an open, inclusive public dialogue on which values we want embedded into our policies, into our legal frameworks and eventually into autonomous AI systems.
Events like the AI on a Social Mission conference allow citizens to be heard through the organizations that represent them, facilitate learning and knowledge transfer, and make it possible for all to share the excitement of AI’s revolutionary potential. Events like this bring together experts of different disciplines to collaborate and share their know-how on the beneficial use of AI for humanity.
Discussing AI’s impact on society is crucial as we rethink the legal frameworks that will implement our societal values and reach the goals we set together. Those at the Montreal conference avidly recommended further opportunities for dialogue.
This article is part of the Ethical and Social Dimensions of AI special feature.
Do you have something to say about the article you just read? Be part of the Policy Options discussion, and send in your own submission.