Sidewalk Toronto’s high-tech, futuristic waterfront proposals for the city of Toronto are mired in controversy, even though Canadian cities have relied on state-of-the-art technology and artificial intelligence for quite some time. Indeed, we have within our reach the ability to create urban AI that serves citizens while also benefiting the private sector.

City traffic management systems (TMS), for example, employ a complex ecosystem of sensors, cameras, dedicated fibre-optic infrastructure, dashboards, control rooms, pedestrian signals and traffic-light control boxes. TMS manage traffic in real time while adhering to rules set by transportation ministries and responding to the realities of the physical spaces in which they operate.

They also operate silently, managing the ebb and flow of movement in cities. These are not generally the AI systems urban residents worry about, although people might wonder why they sit at red lights at empty intersections in the middle of the night, or why traffic never seems to flow the way we want it to. This kind of urban AI, like all good infrastructure, becomes visible only when it breaks down.
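
To make that invisible logic concrete, here is a toy sketch, in Python with invented names and timings (not any real vendor’s code), of the demand-actuated control a TMS runs at a single intersection:

```python
# Toy sketch of demand-actuated signal control -- the kind of rule a TMS
# applies at each intersection. Names and timings are invented.

MIN_GREEN = 10  # seconds a green phase must hold before it may change
MAX_GREEN = 60  # cap so no approach waits forever

def next_phase(current: str, elapsed: int, demand: dict) -> str:
    """Return which approach ('ns' or 'ew') should have the green.

    `demand` maps each approach to True when a loop detector or
    camera reports a waiting vehicle.
    """
    other = "ew" if current == "ns" else "ns"
    if elapsed < MIN_GREEN:
        return current  # respect the minimum green time
    if demand[other] and (not demand[current] or elapsed >= MAX_GREEN):
        return other    # serve the waiting approach
    return current      # no competing demand: hold the green

# A fixed-time controller, by contrast, cycles on a timer regardless of
# demand -- which is why you can sit at a red light at an empty
# intersection in the middle of the night.
print(next_phase("ns", 25, {"ns": False, "ew": True}))  # -> 'ew'
```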

The proposals for Toronto’s east-end waterfront, on the other hand, describe a smart city system that has many wondering whether it will favour the values of the project’s developers and proponents – the government agency Waterfront Toronto and Sidewalk Labs, a sister company of Google under their parent, Alphabet – rather than the interests of residents and the public.

The project has been controversial for many reasons, largely because of an opaque approval process (the full contract has yet to be made public) and the critical questions it raises about data collection and use. “With respect to the Quayside project in particular, the scope, scale and implications of data collection and use are still unclear,” the office of Toronto’s deputy city manager said in a recent report. A year-long public consultation process has just begun.

While Sidewalk Labs successfully responded to Waterfront Toronto’s call for proposals, the procurement process did not include measures to mitigate the possible negative consequences of a private-sector AI smart city. The framework agreement between Waterfront Toronto and Sidewalk Labs was not shared with city staff or other levels of government before the partnership was announced. That’s not terribly surprising, given that the focus is on efficiency, innovation and technology. There are also scant municipal, provincial and federal policies, laws and regulations to govern and guide the deployment of these types of urban AI projects. Sidewalk Labs has signalled it might need “substantial forbearances from existing laws and regulations,” although CEO Dan Doctoroff has promised that “a rigorous privacy and data policy” will be developed.

Sidewalk Toronto plans to process not only transportation-related data collected by its sensors but also data about how residents consume water and energy, captured via smart meters. That’s wise when it comes to managing consumption and the efficiency of the grid, but it’s troublesome when it comes to data collection and resale. What’s more, the proposal calls for a more nimble and integrated social service and health-care system, and the collection of data about very private aspects of residents’ lives and their relationships with government social and health service providers.

This is particularly problematic since it threatens to violate the social contract between citizens and their government. That social contract includes laws, policies and regulations that are ideally managed with oversight, transparency and accountability. The automation of health and social welfare is already being done, after all. And this kind of additional “robo welfare” risks dehumanizing the process of caring for our most vulnerable and reinforcing AI and data-based inequality.

If allowed to proceed, this would potentially shift responsibility for social services to Sidewalk Toronto, corporate entities and their proprietary AI platforms. While corporations must follow the rule of law, they nonetheless answer to shareholders; the bottom line comes first. In the absence of values-based AI, that commitment to shareholders is in danger of being encoded into Sidewalk Toronto’s AI systems.

Canadians have already experienced the outsourcing of services. Recall the business transformation models of Mike Harris’s “Common Sense Revolution” in Ontario, the outsourcing of social welfare and workfare to Accenture call centres, and the shifting of responsibility for administering Canada Student Loans from the government to the banks, and from the banks to a number of external companies.

This type of privatization has not always resulted in better, more intelligent and efficient services, never mind equitable ones. Instead, it’s often led to high costs, a reduction in transparency and a lack of protection for users who cannot opt out of services.

Sidewalk Toronto could very well process personal and private data about the lives of residents in a closed and proprietary platform that blocks open access to algorithms. In other words, this AI system would be “blackboxed,” meaning data inputs, algorithms and outcomes that are recycled back into the system are hidden from view.
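
What blackboxing means in practice can be shown with a minimal sketch (in Python; the platform, the features and the weights here are entirely invented, not anything Sidewalk Labs has published):

```python
# Minimal sketch of a "blackboxed" decision system. The platform, the
# features and the weights are hypothetical; the point is that the
# resident sees only the outcome, never the inputs or logic behind it.

class ProprietaryPlatform:
    def __init__(self):
        # Hidden inside the vendor's system: which data are used and how
        # they are weighted. Outcomes are recycled as training data, so
        # errors can recirculate unseen.
        self._weights = {"energy_use": -0.4, "service_visits": -1.2}

    def decide(self, resident_data: dict) -> str:
        score = sum(self._weights.get(k, 0.0) * v
                    for k, v in resident_data.items())
        return "eligible" if score > -1.0 else "flagged"

platform = ProprietaryPlatform()
print(platform.decide({"energy_use": 0.5, "service_visits": 1.0}))
# Prints only "flagged" -- with no route to the features, the weights,
# or a way to contest an erroneous input.
```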

It may also repeat the mistakes of earlier attempts to outsource social services to corporations and to technologically “reform” those services.

In such a scenario, residents of the proposed neighbourhood would have no way to see how the AI system arrived at decisions that affect their lives within the community, nor would they be able to purge the system of errors. This form of dataveillance is also a very real possibility in the absence of governance and regulation.

This fear of AI systems is not unwarranted – look at credit scoring, for example, an artificial intelligence system that can be encoded with biases. There is no way to opt out: your credit score is your credit score, and it is very difficult to detect and correct errors in this blackboxed AI system. Yet it exerts a tremendous amount of power over someone’s life. A negative score can determine access to a student loan, a mortgage or a place to live, and in the eyes of a credit-scoring company, you are not the customer; you are a data product.

There is very little consumer protection, only feeble regulation and little or no oversight. People do not have the right to see how their score was derived, and correcting errors in the credit-score AI system is a painstaking process.

Data brokers are another form of corporate surveillance AI. They are multibillion-dollar businesses, largely unregulated except by local privacy laws, and they collect thousands of data points about you. Those data are then clustered by AI segmentation systems that may profile you for targeted marketing or, worse, sell your data for surveillance purposes.
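
The segmentation step itself is mundane machine learning. A toy sketch, using Python and scikit-learn with invented features and data, shows how easily thousands of data points become marketable “segments”:

```python
# Toy audience segmentation, the core of a data broker's product.
# Features, values and cluster count are invented for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Each row is one person: [spending score, nights away/month, age]
profiles = np.array([
    [80, 1, 24], [75, 2, 27], [85, 1, 22],   # young, high spenders
    [20, 8, 55], [25, 9, 60],                # older, frequent travellers
    [50, 4, 40],                             # in between
])

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print(segments)
# Each person is bucketed into a segment and sold on for targeting --
# without their knowledge, and with no way to contest the label.
```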

What does a principle-based urban AI system look like? Sweden’s Vision Zero traffic management approach is designed around a set of socio-technological values that aim for zero deaths on the roads. That shifts the responsibility for traffic deaths and accidents to road-system designers, engineers and operators, rather than solely onto drivers, as is mostly the case in North America. Vision Zero is a TMS that values life first: efficiency and innovation are measured and accounted for in terms of reductions in deaths and increases in safety, in addition to smoother traffic flows and reduced fuel consumption.
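
One way to see what “values life first” means computationally is a sketch in which candidate signal plans are ranked by predicted safety before travel time (the numbers and plan names below are invented for illustration):

```python
# Safety-first (lexicographic) ranking of hypothetical signal-timing plans.
# A flow-first system would minimize delay and treat safety as a side
# constraint; a Vision-Zero-style system reverses that ordering.

candidate_plans = [
    {"name": "plan_a", "predicted_conflicts": 4, "avg_delay_s": 35},
    {"name": "plan_b", "predicted_conflicts": 1, "avg_delay_s": 48},
    {"name": "plan_c", "predicted_conflicts": 1, "avg_delay_s": 41},
]

# Fewest predicted pedestrian/vehicle conflicts wins outright; delay only
# breaks ties among equally safe plans.
best = min(candidate_plans,
           key=lambda p: (p["predicted_conflicts"], p["avg_delay_s"]))
print(best["name"])  # -> plan_c: safest first, fastest among the safest
```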

Vision Zero is a collaboration among companies that develop and deploy transportation control and infrastructure services and equipment, car manufacturers (including in-car software developers and autonomous-car makers), consultants and think tanks, not to mention regulators.

So what values are going to be encoded into Sidewalk Toronto or any other urban AI system?

Who governs, regulates, oversees and shapes those values? Who will protect residents from dataveillance, and will they have the right to opt out? Who decides when governments shift the responsibility of managing social and health services to proprietary platforms such as Sidewalk Toronto? Or to companies like Strava or Fitbit, which collect personal health data and are known to have large cybersecurity loopholes that can reveal your location? Or to utility companies that resell your AI-derived consumption patterns to data brokers?

Of course, that’s already happening – just look at the privacy provisions and terms of use for your smart meter data. This may seem benign, but utility companies can monitor each toilet flush, when the lights are on and when you are watching TV. They know when you are home, when you’re out and what you’re doing. This might be acceptable for managing the grid and informing consumption, but what of cybersecurity, and the reselling of your data to third-party actors?
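
How a meter trace reveals so much is worth spelling out. A toy sketch (with invented wattages and signatures; real non-intrusive load monitoring is far more sophisticated) matches step changes in household load to individual appliances:

```python
# Toy load disaggregation: inferring appliance use from a smart meter
# trace. Readings and signatures are invented for illustration.

readings_watts = [180, 190, 2190, 2200, 210, 1410, 185]  # one per interval

SIGNATURES = {2000: "electric kettle", 1200: "microwave"}

def label_events(readings, tolerance=150):
    events = []
    for prev, cur in zip(readings, readings[1:]):
        step = cur - prev  # a jump in load when an appliance switches on
        for watts, appliance in SIGNATURES.items():
            if abs(step - watts) <= tolerance:
                events.append(appliance)
    return events

print(label_events(readings_watts))  # -> ['electric kettle', 'microwave']
# From nothing but the meter trace, a third party can infer when you are
# home, when you are awake and which appliances you are using.
```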

Could data about residents’ health and social service needs be made available to health insurers? Is this a new form of socio-technological engineering? Will people have the right not to be connected? Who will mediate disputes? Will there be a neutral third party, such as an AI ombudsperson?

And who governs in a smart city? The mayor, city councillors and residents, or Google, IBM, credit-scoring companies and data brokers? Why would we need government at all if everything can be deferred to supposedly objective, unbiased, efficient, innovative and infallible AI systems?

These are some of the important questions that must be at the forefront if the state is going to delegate responsibility for managing the private information of its citizens to corporations.

Because of these concerns, there is now a growing public outcry over Sidewalk Toronto’s brand of urban AI, including calls for more public engagement and oversight.

We are thankfully beginning to see engaged, public-interest research into smart cities. A number of scholars are mobilizing knowledge gleaned from publicly funded research and translating it into recommendations, guidelines and public policy. What we really need, however, is a more open and transparent public debate about the role of data and technology in our society, as well as regulations and ethics.

We can begin by looking to the European Union’s General Data Protection Regulation (GDPR), which will, when it comes into force, give residents sovereignty over their data and the right to access and know how an AI system has made decisions about them.

Closer to home, we can follow the ethical guidelines for public-interest and ethical smart cities developed by Quebec’s Commission de l’éthique en science et en technologie (CEST).

Well-governed, well-regulated, principled urban artificial intelligence that treats the public interest as paramount, much like the systems that peacefully and efficiently help our cities run smoothly, is possible. As Sweden’s Vision Zero proves, such ethical urban AI benefits not only citizens, but the private sector as well. We have at our fingertips the ability to create a particular kind of trusted Canadian AI model that we could export with pride to the rest of the world.

This article is part of the Ethical and Social Dimensions of AI special feature.

Photo: View of the port lands area at Toronto’s eastern waterfront, home of a future high-tech neighbourhood. (Shutterstock/JHVEPhoto)



Tracey Lauriault
Dr. Tracey Lauriault is an assistant professor of Critical Media and Big Data in the School of Journalism and Communication, Communication Studies, at Carleton University. She is also a research associate with the Programmable City Project, and the Geomatics and Cartographic Research Centre.

You may reproduce this Options politiques article, online or in a print periodical, under a Creative Commons Attribution licence.
