It is always difficult to keep abreast of current issues in Canadian post-secondary education. The country’s educational needs vary demographically, historically, geographically and perhaps (surprise, surprise!) even politically. But concerns about the current state of undergraduate education and the system design issues that go with it have recently been generating discussion across the country.

My first observation on this issue is that concerns for undergraduate education are not new. There has been a steady stream of published critiques of undergraduate education for most of my 21 years as president of Mount Royal University. I was greeted in my first year as president (1990-91) with a 1991 report by Stuart Smith, sponsored by the Association of Universities and Colleges of Canada (AUCC). I have been gifted in my final year with the 2010 book Academic Transformation by I.D. Clark and colleagues, sponsored by the Higher Education Quality Council of Ontario. And there have been a half dozen books in between these two, all pointing out in a very public way what most know already about the changes over the past 20 years in Canadian university class sizes, part-time/full-time faculty ratios, teaching workloads, research-based reward structures and so on. None of these authors, and none of the changes they highlight, suggests an improving experience for the undergraduate student.

My second observation, however, is that this criticism has been either ignored or discounted by most universities. This is despite the fact that all Canadian universities are primarily undergraduate, and the majority are almost totally undergraduate; despite the fact that the undergraduate environment has clearly deteriorated since 1991 (I think we would all like to go back to the environment of 1991); and despite all of the writing and evidence of this deteriorating undergraduate environment.

I appreciate that there are some good reasons for this historical lack of attention to undergraduate education.

Despite the critics, students generally have expressed satisfaction with either the experience or the brand (sometimes despite the experience) of their Canadian university degree. Canadian public universities traditionally have had a monopoly on the delivery of undergraduate degrees, giving students few choices. There are, in fact, many universities in Canada that actually excel by any measure in providing an outstanding undergraduate experience. And finally, various pressures such as funding sources (provincial and federal), national lobby priorities such as those of AUCC over the past decade or so and public ranking exercises such as the traditional Maclean’s university ranking have tilted the institutional focus toward research.

So the renewed interest in undergraduate education is welcome and overdue. But why is the subject garnering so much interest today from scholars and policy-makers alike? As is sometimes the case, it could be because undergraduate education is becoming a cause célèbre in other parts of the world. For example, in the United States there are questions about what students get for the hefty price tag on an undergraduate degree. There are also concerns about the high loan default rates among students in some private, for-profit institutions. And there, as in Canada, there has been a stream of authors questioning the efficacy of the United States’ undergraduate effort. This has led to a number of initiatives to define the expected “learning outcomes” from an undergraduate degree. As Kevin Carey recently wrote in the Chronicle of Higher Education, when it comes to student learning in US universities, “‘Trust us’ will not cut it anymore.”

And elsewhere in the world (in Europe and Australia, for example) similar concerns about degree quality and degree transfer have led to increased attention to the students’ experience and outcomes.

But I think the reasons for the increased interest in Canada, while certainly mirroring events elsewhere, are quite home grown.

  • The monopoly is weakening. Colleges and other institutions from the non-university/AUCC sector are increasingly involved in offering university-level undergraduate experiences. The Alberta and British Columbia “university transfer” model works too well to be ignored by the rest of the country. And there are new types of hybrid institutions offering a mix of non-degree and degree programming.
  • Governments in some provinces continue to look for more cost-effective degree delivery models. For example, Alberta’s 2003 Bill 43 was a blatant attempt to allow colleges to offer full baccalaureates with no research component or, in fact, none of the traditional university “trappings” and, consequently, at a much lower cost. Ontario is considering how to add 70,000 more undergraduate places over the next decade, and thanks to Clark et al., the government may begin to view the current “model” of undergraduate delivery as financially unsustainable. And British Columbia has established “teaching-focused” universities.
  • Quality assurance is now on the agenda of national bodies such as the Council of Ministers of Education, Canada (CMEC) and the Lumina Foundation in the United States. Both have prepared degree outcome frameworks.
  • The public may have now been pushed too far by changes in undergraduate delivery that have been so well documented by the critics over the past 20 years. Quite simply, name or brand may not continue to trump the growing dissatisfaction with the student experience as much as it has in the past for some universities.

But, while all of the above circumstances are worthy of essays on their own, I think that the real source of interest in undergraduate education everywhere in the world stems from the view that improving the undergraduate experience is seen today as much more than simply improving teaching. The attention internationally is increasingly on “quality assurance” in undergraduate education and system design, which will lead to a student experience that will in turn lead to the highest levels of student learning. And there are growing bodies of research that approach the “how” and the “what” of student learning from different perspectives.

Other than the various media ranking exercises, the two predominant outcome measures in North America today are the National Survey of Student Engagement (NSSE) and the Collegiate Learning Assessment (CLA). In addition, many Canadian universities have used the Canadian University Survey Consortium (CUSC) exercise for almost 20 years. The NSSE and CUSC exercises provide detailed information on the undergraduate student experience, while the CLA is a measure of student intellectual progress (my description, not theirs). While I am not aware of any studies that have examined the learning effects of CUSC scores, I know that many universities (my own included) have used CUSC data to inform institutional strategies. However, there has been more research on both the NSSE and the CLA, and both bodies of data suggest design strategies to maximize the student experience and learning.

The NSSE is clearly the predominant outcome measure. It has been in use for more than a decade and has been validated by thousands of students and hundreds of institutions in the United States. But, most importantly, not only have the measures of student engagement been validated, but many studies have shown the link between high levels of student engagement and increased student learning and student success. And research is also showing which institutional design factors lead to better measures of the student experience. The NSSE will be a very powerful instrument in the consideration of system and institutional design.

The CLA is more recent but is gathering considerable momentum as well, largely due to Academically Adrift, by Richard Arum and Josipa Roksa, which reports their research measuring students’ learning gains on the CLA over their four-year degrees at various institutions.

The CLA purports to measure the level of intellectual growth (critical thinking, complex reasoning and written communication), and Arum and Roksa showed that very few institutions engendered such growth in their students over the four years of an undergraduate degree. Apart from raising once again the question as to whether taxpayers are getting their money’s worth, the most interesting issue for the current discussion is just exactly what some institutions did that others did not that led to positive changes in this respect in their students.

There is a risk, as Robert Birnbaum (2000) suggests, that “when we cannot measure what we value, we value what we can measure.” But I suspect that at least the NSSE, and perhaps the CLA, will become the predominant publicly accountable measures for Canadian universities over the next decade, replacing the traditional Maclean’s exercise as the go-to guide for students and parents.

Measuring the experience students have had or even measuring what they have learned during their undergraduate life is unquestionably important, especially if selected learning experiences can be shown to lead to higher levels of learning or to postgraduate success. But university educators are increasingly turning their attention to how students learn in undergraduate settings. In other words, how does the experience enhance the learning? In general, these studies are part of the growing field of the scholarship of teaching and learning. There is much work going on in this area across the country, but for the purpose of the present discussion on institutional design I will mention two authors: Richard Light’s studies of undergraduate students at Harvard University and Loren Pope’s Colleges That Change Lives.

I will not try to do justice to Light’s research and books in one paragraph. However, his work is one of the seminal efforts to try to link students’ undergraduate experience with their levels of learning, and he did it by asking the students themselves. The research is extensive, yet the nine or so recommendations for the redesign of the undergraduate experience are relatively precise. However, the most influential part of his work has been the development of the student assessment model, a process whereby any institution can use students’ feedback about their learning to redesign the undergraduate learning experience in the institution. While Harvard University uses this approach, as far as I am aware, Mount Royal University is the only Canadian university to do so.

Pope’s approach was similar to Light’s, although it was perhaps less academically rigorous. His goal as a university admissions adviser was to find the 40 colleges in the United States that made the most difference to their students in terms of success after graduation. He spent considerable time interviewing thousands of students at many institutions. I mention and recommend his work here for two reasons. The first is that his suggestions for the institutional design of “colleges that matter” are remarkably similar to Light’s. And second, he coined the term “Ivy League Scam” to refer to the fact that few of the large “brand” institutions showed up on his list of institutions that seem to have made the largest learning impact on their students. Interestingly enough, the recent research on the CLA showed similar results.

And embedded in all of these studies of the “how” of undergraduate education are changes in instructional technology in the past two decades. From Web-based course assistance software to entirely Internet-based learning, the delivery models are constantly being redesigned. But perhaps most important of all, the dramatic effects of the changing ways to access knowledge are reaching the learning environments in universities. With knowledge of the world as far away as the closest Wi-Fi connection, the professor-learner relationship has been changed forever. And there are some system and institutional design implications of this changing student-knowledge-professor relationship.

These are just several examples of the type of thinking that is starting to emerge from the growing scholarship of teaching and learning. Other notable studies include Susan Ambrose et al., How Learning Works, and G.D. Kuh et al., Student Success in College: Creating Conditions That Matter. There is growing evidence that the student undergraduate experience is changing, and that the experience can indeed be designed to maximize student learning.

Information from all of these sources will help guide institutional and system policy-makers in redesigning high-quality undergraduate education. And each literature/research area will provide its own design prescription. However, the following could be some starting assumptions about the Canadian undergraduate system before we venture into the wholesale redesign.

Canadian universities are all different from one another. Any discussion of the state of undergraduate education in Canada must start from the understanding that Canada has a very differentiated group of universities at the current time. There are 95 universities in Canada (as defined by AUCC membership), and they range in size from the University of British Columbia, the University of Toronto and York University, each with 60,000 or so students, to Algoma University, with about 1,000. They range in age from the nearly 190-year-old McGill University to the two-year-old Mount Royal University. They range from very urban to very rural. Over half have less than 5 percent of their enrolment in graduate school (some have no graduate programs), and a few have over 20 percent of their enrolment in graduate school.

But more relevant for the purposes of the design of undergraduate learning is how Canadian universities are differentiated on these various outcome and process measures. At this point only the institutional CUSC and the NSSE results are readily available, and these are available only because they are self-reported. And the only way that the public can see these collective results is in the Maclean’s annual university ranking exercise. The preliminary evidence shows that there is a wide gap in NSSE scores between the top- and bottom-ranked institutions. A good number of institutions are well above both the North American and the Canadian benchmarks. This suggests that, at least as far as NSSE (the student experience) measures are concerned, the undergraduate problem is not system pervasive and is very different in different institutions. While there are not yet any Canadian CLA data, it appears that the CLA scores in the United States show similar differentiated institutional results, with some institutions showing little or no progress and others showing considerable progress.

However, the most relevant part of this data analysis is that there are apparently institutional elements that lead to quite outstanding student success, no matter what the measure. Sorting out what some institutions do right might be a good starting point to help those institutions that appear to not be measuring up.

The crisis has to do with quality; it is not financial: It is not a secret that one of the major forces behind the possible redesign of the undergraduate delivery model is the opinion that the current model is financially unsustainable. And even if it is sustainable, most governments are still looking at ways to deliver undergraduate education in a more efficient manner. What all of the unsustainability pundits are really saying is that we need to reshuffle the research-instruction formula in the faculty workload. Less research and more teaching will push more students through the system at the same cost, lowering the “cost per unit of production,” as economists like to say.

I would agree with the critics that there is a lot of evidence that the research-instruction relationship needs to be reexamined. The heavy emphasis upon the research role of faculty has changed the undergraduate instructional setting. Teaching loads have declined over the past two decades, while revenues have remained relatively constant. Something had to give, and it has been things like class size and the part-time/full-time faculty ratio. And I agree that this situation is not sustainable. But the reason is quality, not finances. Institutional scores on outcome measures such as the NSSE, the CUSC and the CLA show that institutions that are smaller and essentially undergraduate (less than 5 percent graduate enrolment) do extremely well. They have exceptional class sizes, acceptable part-time/full-time faculty ratios and faculty who are motivated to work in the classroom as well as the lab. Essentially they spend all the revenues they get for undergraduate students on undergraduate students. Most of the larger, more research-intensive institutions shift undergraduate resources to other levels, and the result is evident in the student outcome scores.

The system needs to be redesigned, not to save money but to make sure the money is spent in the right place for the right outcomes.

Public rankings and student satisfaction measures: In the absence of any other public activity, the Canadian media publish the input data and output performance data on Canadian universities. Up until the past couple of years the focus has been on “input” variables as defined by the Maclean’s ranking, a traditional, now 20-year-old ranking of Canadian universities. While Maclean’s has changed the variables and the data-gathering methods over the years, the exercise largely reflects traditional university input measures such as research activity, national awards and budgets. However, more recently, even Maclean’s has started to report rankings on the newer measures of student experience such as the NSSE and the CUSC. And what everyone sees, but no one comments on, is the relationship between the two ranking exercises.

Figure 1 is a scatter diagram of Canadian universities graphed by research activity and one NSSE variable using statistics from the Maclean’s exercise. The data are not complete on all universities, and only two variables were chosen, so the exercise should be seen as heuristic, not conclusive. But the figure does suggest a few observations for the system design agenda.

First, as I have already noted above, there are many Canadian universities that excel on the NSSE and other student experience measures. But, from this figure, it is clear that institutional circumstances may account for the majority of the NSSE score. That is, the top left quadrant is largely populated with smaller, undergraduate universities, and they clearly excel on the student experience measures, perhaps simply because of their institutional circumstances. The bottom right quadrant is mostly the larger, research-intensive institutions (it includes all of the self-described “top five” research institutions), and they score poorly on student experience measures for the same reason.

Second, there is a smaller group of universities that defy the relationship between size and research focus and score very well on the NSSE measure.

And finally, there is a large group of universities that do not perform well on either measure.

Most of this analysis reinforces the observation that there are institutional characteristics that can lead to improved ”œoutcomes” in the undergraduate student. But it also shows that these characteristics can be put in place by design, and do not just occur by default.

”œCherry picking” will not work: We seem to be facing a societal epidemic whereby policy-makers only read or believe the research that supports their opinions. Vaccinations, climate change and fluoridation policies come to mind. The issue of undergraduate system design is similar. The quick reaction to figure 1 and some of the new data on student outcomes might be to see the best solution as a system design that simply eliminates research from the faculty role and the institutional mission. Indeed, if you were simply to design a system to score well on the NSSE alone, this is exactly what you would do. The NSSE scores for teaching only or for focused institutions (community colleges and special-purpose universities such as Quest) are generally off the chart. Similarly, the 1996 meta-analysis by John Hattie and Herbert W. Marsh is often quoted to support the conclusion that there is no relationship between good instruction and research productivity, although these authors spent most of the decade after their original article was published in 1996 decrying the misuse and misrepresentation of their findings (as have many others). And then there will be others who will insist that there simply cannot be a university degree where faculty do not have research agendas as a priority role.

When all of the student success and undergraduate outcomes research is examined, the answer is somewhere in between. It is clear that student satisfaction outcomes are better in an environment where instruction is valued, but it is also clear that many learning outcomes (e.g., those outlined by the CMEC, the Lumina Foundation and the CLA) would not be achieved outside of an active scholarly environment. For example, many of the smaller, undergraduate-focused universities can document the tremendous success that graduates have in their applications to graduate school, largely due to the active involvement of undergraduate students in faculty-led research and publication. So while it will be necessary to reexamine the prevailing faculty workload relationship between instruction and research, it is far from clear that a research-free environment would achieve the desired undergraduate outcomes.

While the research is becoming much more prescriptive regarding the appropriate system design to achieve certain outcomes, design initiatives must take care to not “cherry pick” from the outcomes research. To do so would certainly leave out something else of undergraduate value.

The common institutional factors: When one examines the various institutional factors that arise from the work of researchers such as Arum, Pope and Light, and sets aside institutional default factors (i.e., the big versus the small), the design factors that lead to exceptional performance on the various undergraduate “quality” measures are surprisingly consistent. The following is my description of what such an institution in Canada would be like.

  • There is a balance between research and instruction in faculty workload that reflects the importance of the undergraduate student experience. In general, faculty view instruction as the primary (though not sole) component of their role.
  • The focus upon the student experience and learning outcomes is not a delegated task, but part of the institutional culture led from the “top.”
  • Institutional and individual reward structures reflect the focus on student success and satisfaction.
  • The institution takes pride in its position as an institution with a focus upon undergraduate success and is not concerned with issues such as “tiering” or “mission drift.”
  • The student experience is viewed as more than what happens in the classroom, and many out-of-class activities are linked to the classroom learning goals. There is a shared responsibility for educational quality and student success.
  • There are very high and demanding academic standards.
  • Students often report a significant mentoring experience, usually performed by a faculty member.
  • Finally, institutions that perform well on measures of student learning have formal processes to engage the students in discussion regarding their experience.

There is increasing evidence that there are institutional and system factors that can affect the learning experience of undergraduates at Canadian universities. This evidence can come from input measures such as those presented by the traditional Maclean’s ranking, from student experience measures such as the CUSC and the NSSE, from output measures such as the CLA, from more traditional measures of student success after graduation or from the degree outcomes outlined by the CMEC or the Lumina Foundation. Which of these “measures” an institution decides is paramount will determine the design of the institution and, at the system level, will determine the degree of differentiation in models for the delivery of undergraduate education.

There appears to be considerable differentiation between Canadian universities on all of these factors at the current time, but it is not clear whether this differentiation is due primarily to contextual issues such as size, program mix and location, rather than purposeful design. However, it is becoming increasingly evident that there can indeed be ”œdifferentiation by design” when it comes to the delivery of undergraduate programming in Canada. Some of the newer universities that have been established with undergraduate-focused missions are doing just that.

However, in order to arrive at the appropriate design to maximize learning outcomes, it is necessary to change the approach to policy-making on this issue. The traditional government approach is based first on cost containment considerations, and only second on design considerations. This approach needs to be reversed. All policy-makers at the institutional or system level should consider first the priority measures, second the environment (design) that is necessary to excel according to those measures and third the cost of that environment. If the cost is unsustainable or unsupportable, then conscious and realistic decisions should be made about which part of the design and consequently which part of the measures are unattainable.

At risk is the status of our Canadian university baccalaureate brand, and perhaps the public monopoly on its delivery.

Policy guided by research. What a wonderful idea.