Canada needs to beef up its data infrastructure, but social indicators alone don’t always influence policy.
In their recent Policy Options post, Annie McEwen and Geranda Notten describe the lack of good quality data available to assess how well Canadian social goals are being met. They suggest that one solution would be the creation of an arm’s-length agency to report on Canadians’ well-being and social policy outcomes — a social indicators observatory.
They envisage the observatory engaging in three types of activities: producing indicators, reporting on social policy and supporting research and collaboration. Its core business function would be to regularly collect or collate data in order to provide indicators of Canadian well-being and social policy outputs.
The problem identified by McEwen and Notten is real. The data situation in Canada is embarrassing. Nonprofit organizations such as the Caledon Institute and the Child Care Resource Centre have been driven to crowdfunding for the collection and distribution of key information on Canadian social policy provision such as social assistance rates and caseloads – data that were previously federally funded. Advocates for those with low income, such as John Stapleton, have had to devote their personal energies to providing basic data that ought to be made routinely available from publicly funded agencies.
Before accepting the creation of an observatory as a solution, at least four questions should be answered. Have past attempts at creating indicators been successful? Who should collect the basic data from which those indicators are constructed? Who should publish social indicators? Is there a need for a set of social indicators that focuses on particular priority topics, such as poverty reduction?
We should also be clear from the outset about what we mean by social indicators. Social indicators measure progress and trends in key areas of societal well-being, and they also provide comparisons across different jurisdictions. For example, they might compare unemployment rates today with those in past years, as well as comparing unemployment trends and levels in different provinces. Indicators are usually intended to attract public attention and hence spur government action.
Two general approaches are used in constructing these indicators. One involves combining separate statistical series that describe, for example, income levels, health, unemployment or public safety, into a single statistical series that attempts to summarize all aspects of social well-being in a country or region. The United Nations uses this approach in its Human Development Index. Canada has been a leader in developing ways of creating such high-level indicators, especially the Index of Economic Well-being, currently maintained by the Centre for the Study of Living Standards.
The other approach, instead of looking for a comprehensive way of measuring well-being, simply presents existing statistical data in a way that allows for comparative analysis of trends in areas that align with policy priorities. The OECD’s Society at a Glance is likely the best international example. In Canada, the former federal department Social Development Canada used this approach in its web page “Indicators of Well-being in Canada” (which was not maintained and is now available only in archived form). A more disaggregated set of social indicators was once published by Statistics Canada in the now discontinued publication Canadian Social Trends.
In many cases, both approaches are employed. Users are provided with a set of the most relevant statistical series. However, these can also be combined into comprehensive indicators for use in some applications, for example, to compare changes in the overall well-being of different countries or regions. The Canadian Index of Wellbeing, maintained by the Faculty of Applied Health Sciences at the University of Waterloo, is an example of such a hybrid approach.
My perspective, based on several decades of experience in various social departments in the federal government and at the OECD, is that indicators have had only a modest influence on policy, at least in developed countries. Comprehensive indicators along the UN lines are simple and can capture headlines, but they are of little value in identifying where action is needed. When trying to find out what is causing changes in a comprehensive indicator, analysis can easily get bogged down in methodological issues. This can deflect attention away from a focus on the substantive policy issues.
The second approach – presenting a series of separate indicators in a common format that allows comparisons among, say, countries or provinces – avoids this trap. However, taken in isolation, such compilations will never gain much public attention. They can, however, provide information that analysts and advocates can then use in drawing attention to particular social problems and calling for action to address those problems.
It is important to remember that highly aggregated indicators showing trends and comparisons can, at best, draw attention to a potential problem, but they are of little use in pointing to policy solutions. Most social policy problems are complex in nature. Understanding the potential effects of different policy options often requires micro-analysis, i.e., the use of statistical techniques that are based on individual-level data rather than the summarized data that is the basis of nearly all social indicators. It often also requires longitudinal analysis, i.e., information about changes that take place over time in individuals’ lives, as opposed to the point-in-time cross-sectional averages that are used in social indicators.
What may therefore be most useful is the regular publication of a set of comparative social indicators providing standard, authoritative information. Advocates could use that information to draw public and political attention to social problems, and analysts could combine it with other data to explore the causes of those problems and devise potential solutions. However, as noted, Canada has had considerable experience with indicators along these lines. Before starting something new, it would be useful to clearly identify the gap to be filled and the lessons learned from past experience.
Statistics Canada has the mandate to collect the data needed to support the development of indicators. This includes data relating to social programs of the different orders of government, a priority data gap identified by McEwen and Notten. Given the mandate and technical capacity of Statistics Canada, it would seem to be problematic to ask an external body, such as an observatory that has a mandate for policy-sensitive analysis (including comparing success in different jurisdictions), to also be the authoritative and neutral body that takes on the difficult task of collecting, manipulating and releasing consistent administrative data from all orders of government.
As noted, in Canada a number of different bodies have compiled and released social indicators: Statistics Canada, a federal social department, a nonprofit research body and a university faculty. These would seem to be viable bodies to also carry out any new work on indicators that may be needed. An independent observatory could certainly be added to the list of possible candidates.
Over the longer term, Statistics Canada could itself publish these kinds of indicators – possibly with an expert panel to advise on the initial contents and on major changes. These would be neutral, descriptive indicators that facilitate comparisons, perhaps along the lines of the OECD indicators – a Canadian version of Society at a Glance, or a website based on a version of Social Development Canada's Indicators of Well-being in Canada, updated to also include data on government programs. This would be quite similar to work that Statistics Canada has already done, and locating it there would help ensure continuity. Other bodies would use these authoritative indicators to undertake policy analysis and advocacy, and to hold governments to account. This would free the observatory to concentrate on collaboration and on analyzing data produced by others.
While an observatory as proposed by McEwen and Notten would cover a range of data on social topics, it is clear that the authors' main emphasis at present is on poverty reduction. If the observatory were also to collect and compile social indicators, it would need a broad mandate. Social issues are too intertwined to be broken out into separate data domains. An understanding of poverty and low income requires analysis of data about equality, health, family formation, wealth, education and many other social factors, as well as current income.
On the other hand, if other bodies were responsible for data collection and dissemination, the observatory would be able to concentrate its attention on priority topics such as poverty reduction. It would also help free up the observatory to look at other kinds of data in addition to high-level indicators, including longitudinal data and micro-level analysis.
McEwen and Notten have started what could become a productive dialogue on ways of translating evidence-driven social policy from rhetoric to reality.