A spectre is haunting the 21st century: the spectre of universal risk management. Once a term mainly used by insurance companies and actuaries, it has only recently become a favourite buzzword for everyone from health care professionals to stockbrokers. Rather as public opinion sampling rose from minor curiosity to oracular tyranny between the 1930s and the 1970s, risk management (roughly, the application of mathematical and statistical tools to the analysis of decisions made under uncertainty) has been making a less immediately visible but even more pervasive and imperious evolution, especially since the last years of the twentieth century.

Until recently little noted outside the financial media, risk management has become a vast empire of new academic disciplines, computer software, consulting firms, new corporate enterprises, and Nobel-Prize-sanctified research studies. Like most grand ideologies, it does not have a single clear meaning, but several almost unrelated ones, with the more tendentious versions acquiring respectability from the most respectable ones. The overall reach of RM is now breathtaking. To provide only a partial list, it is now routinely applied to the design of hedge funds; to the study of endangered biological species; to assessment of new pharmaceutical products; to the possibilities of global warming. Terrorist threats are anticipated by it, and formulas for the allocation of defensive measures prepared. In the multi-billion dollar Enron collapse, RM even has its own splendid scandal.

The attempt to control risk by the use of mathematics can be traced as far back as the grain reserve calculations of Babylonian and Egyptian antiquity. Technique leaped forward in Renaissance Europe, when merchants started to understand the advantages of taking out marine insurance on their lucrative but highly uncertain ship cargoes. Steady advances in mathematical statistics since the 17th century broadened its scope and usefulness in both insurance and government demographic measurements. More new technique came with the development of the theory of games by the brilliant John von Neumann sixty years ago. Above all, computers, from their earliest arrival, brought immense increases in the breadth, depth, and speed of computation, inconceivable in the first half of the 20th century, and routinely leaping upward, year after year, in the second half.

In recent years, for bright young scholars with any flair for mathematics, risk analysis has also offered several attractions they were unlikely to find in fields of pure mathematics like abstract algebra or the theory of numbers. Some of the "purest" mathematical studies have turned out to have surprising practical applications, an idea lately brought home to the general public by A Beautiful Mind. But such applications are still the exception rather than the rule. In fact, even many major research lines in the empirical sciences do not necessarily yield new products and business enterprises for many decades. Risk management, from the von Neumann era on, has always offered the prospect of working not only on big concerns like national defence and economic policy, but on the fascinating idiosyncrasies of the stock market, at once intellectually challenging and potentially lucrative. Furthermore, from the end of the 1960s, the study of risk was given a powerful new incentive by the decision of the Nobel Committee to begin offering Nobel Prizes in economics, very frequently for work as heavily dependent on complex mathematical argument as that used by the physicists.

Economists may like to believe that the prize was an entirely deserved salute to their analytical rigour and persuasiveness, but it appears more likely that the change came about when some of their number impressed the Swedes by the use of more and more abstract and intimidating mathematical technique. From the first award to Jan Tinbergen in 1969, presentations have strongly endorsed the work of theoretical economists making use of recondite formulas from higher mathematics to construct models of society. Tinbergen himself was an ardent socialist who owed much of his intellectual formation to study under the physicist Paul Ehrenfest, who brought him into contact with Einstein and similar luminaries from theoretical physics. In the first two decades they were offered, the sequence of prizes indicated that the Nobel Committee was anxious to avoid giving the impression that their fondness for econometricians, nearly all socialist or left-liberal in inclination, was narrowly partisan: the awards to Tinbergen and Samuelson were quickly offset by others in the 1970s to Friedrich Hayek and Milton Friedman, and in the 1980s to George Stigler and James Buchanan. But balancing the ticket has not given the economics prizes the same kind of overall acceptability achieved by the natural science ones.

The major research of Tinbergen, Paul Samuelson, and a host of other highly mathematical economists was not necessarily directed to risk management in the narrowest sense, but by the last decade of the 20th century, several awards were going to the "rocket scientists" like Merton Miller (1990) and Myron Scholes (1997), who had concentrated their attention on such topics as the development of a pricing formula for stock market options. As the econometrics of the mid-century attempted to rationalize socialism and social democracy, the new quantifiers aimed at giving microscopic precision to the new world of global capital markets, but without entirely escaping the rationalizing hubris of their econometric predecessors.
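The pricing formula in question is the Black-Scholes-Merton formula for a European call option. A minimal sketch, using only the standard library (the parameter values below are purely illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution, via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike price, r: risk-free rate,
    sigma: annualized volatility, T: years to expiry."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: at-the-money call, 5% rate, 20% volatility, one year.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))  # about 10.45
```

The formula's elegance is part of its seduction: five inputs, one closed-form answer. Its assumptions (constant volatility, continuous trading, normally distributed returns) are exactly the points where the world later refused to cooperate.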

The 2002 award suggests another balancing act of a somewhat different kind. The United States and Europe now have vast numbers of additional rocket scientists following in the footsteps of the Nobel laureates. However, the glitter of those past medals was somewhat dimmed by the spectacular bust of Long-Term Capital Management. LTCM was a deliberate attempt to create a sure-fire investment fund for affluent individuals and institutional investors that drew on a dream team of rocket scientists, possessed of unprecedented levels of mathematical expertise, to arbitrage world stock and bond markets. The results looked terrific for a couple of years, but the crash was terrific, too. Furthermore, the blowup was not just a glitch in an otherwise steady ascent. Brought on by the Russian rouble devaluation, it was a reminder that even the most sophisticated mathematics is no protection from a world that refuses to unfold as it is supposed to.

Last year, the Nobel went to Daniel Kahneman and Vernon Smith, developers, along with the late Amos Tversky, of a whole new academic field called behavioural economics, which applies empirical studies of psychology to theories of how people make uncertain decisions. From the beginning, these empirical studies demonstrated some surprising things about how well-educated and supposedly rational people actually perceive risk. For example, questioning very different test groups in Israel and the United States showed that these groups consistently made different choices of action when presented with alternatives that were logically and mathematically identical, but framed with slight changes in the ordering and emphasis of the information provided.

Tversky, the main original thinker of behavioural economics, died of cancer in 1996 when only fifty-nine. He was a brilliant Israeli, at home in both psychology and economics, whose own perception of risk was anything but abstract: he was a war hero familiar with decisions that could be a matter of life or death. He and his colleagues identified a number of psychological quirks affecting decisions, like excessive preoccupation with "sunk costs," asymmetries in perceptions of "regret" as well as reward, and others. Most notably, they found a consistent failure to recognize how many upward or downward turns in all kinds of numerical values are no more than random oscillations around an average.
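That last quirk, mistaking random oscillation for real change, can be made concrete with a short simulation of regression to the mean. Everything below (group size, noise levels) is an illustrative assumption, not drawn from the Kahneman-Tversky studies themselves:

```python
import random

random.seed(42)

# Each measured "performance" is a fixed underlying skill plus independent
# noise. An observer who forgets the noise will read the second measurement
# of yesterday's stars as a real decline.
N = 100_000
skills = [random.gauss(0, 1) for _ in range(N)]
first = [s + random.gauss(0, 1) for s in skills]
second = [s + random.gauss(0, 1) for s in skills]

# Take the top 10% of first-round scores and compare group averages.
top = sorted(range(N), key=lambda i: first[i], reverse=True)[: N // 10]
avg_first = sum(first[i] for i in top) / len(top)
avg_second = sum(second[i] for i in top) / len(top)

print(f"top decile, first measurement:   {avg_first:.2f}")
print(f"same people, second measurement: {avg_second:.2f}")  # noticeably lower
```

Nothing about the top group got worse; the lucky component of their first score simply failed to repeat. The same arithmetic explains why "hot" fund managers so often disappoint the year after they are anointed.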

Somewhat ironically, this new psychologically informed analysis was almost immediately added to the repertoire of calculation techniques used by the investment industry and the risk management consulting firms, and has lately found its way into the vast number of popular books claiming to provide individual investors with the vital key to market understanding.

So beyond its traditional use in insurance, "risk management" now describes the use of several mathematical techniques, and also several different kinds of investigation and activity, old and new. Beyond the insurance and actuarial core, plus rocket science and behavioural economics, it now commonly takes in:

  • The Theory of Games, descended from the World War II work of John von Neumann, and carried into the study of war and international relations by the brilliant mathematician and futurologist Herman Kahn. It has lately become common for the media to identify Edward Teller as the original for Doctor Strangelove, but Kahn was the more likely candidate. He horrified the meek and mild with the ruthless quantitative Machiavellianism of his classic works, On Thermonuclear War and Thinking About the Unthinkable. But once his ideas moved out of the think-tanks and into Ivy League political science departments, the abstraction tended to be preserved, while his intellectual clarity and absence of jargon vanished.

  • Hedge Funds. Only a few years ago, this term most often referred to the large and secretive offshore investment pools like the Quantum Fund of George Soros, whose "hedging" mainly took the form of making several simultaneous huge bets on currency movements. Prohibited to small investors and financial institutions, they have lately shown they can stumble in much the same way as more prosaic punters. Now there is a spreading proliferation of smaller and theoretically more conservative hedge funds, growing so rapidly that several rival "hedge indexes" have also appeared on the scene. These less ambitious hedges mostly use the original device invented by Alfred Winslow Jones half a century ago, of "insuring" stock purchases with offsetting short sales or derivatives whose prices move in the opposite direction from the original purchases. The handful of such hedge funds that once existed used to find profit niches precisely because no one else had noticed them. Now arising in large numbers as replacements for tottering mutual funds, they resemble an army of tourists heading for the same beach on the same day in pursuit of privacy.

  • Computer Software. Some or all of the mathematical techniques and investment strategies have been combined and packaged into software programs for the use of corporations and banks. Their executives can now be presented with a measure called VaR, or Value-at-Risk, involving various manipulations of the standard deviation over time of the prices of a portfolio of investments. It gives a convenient single number, but one with many weaknesses. For example, the VaR might be equal for a bank's two bond trading desks in different geographical areas, although competent bank executives would intuitively and correctly see far larger possible dangers in one than the other. The most likely software they would be using is a regulator-approved package called RiskMetrics, developed and marketed by J. P. Morgan. Examination of the current state of J. P. Morgan's loan portfolio suggests they didn't know as much about risk as they thought they did.
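The standard-deviation manipulation behind VaR can be sketched for a simple two-asset portfolio. The volatilities, weights, and correlation below are illustrative assumptions, and the exercise also shows the weakness just described: the single number says nothing about how the losses beyond it are distributed.

```python
from math import sqrt

def parametric_var(value, weights, vols, corr, z=2.326):
    """One-day parametric (variance-covariance) VaR for two assets.
    value: portfolio value; weights: fraction held in each asset;
    vols: daily standard deviations of each asset's returns;
    corr: correlation between them;
    z: standard normal quantile (2.326 corresponds to 99% confidence)."""
    w1, w2 = weights
    s1, s2 = vols
    # Portfolio variance from the two-asset covariance formula.
    port_var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * corr * s1 * s2
    return z * sqrt(port_var) * value

# $1M split evenly between a 2%-a-day and a 3%-a-day asset, correlation 0.3.
var_99 = parametric_var(1_000_000, (0.5, 0.5), (0.02, 0.03), 0.3)
print(f"1-day 99% VaR: ${var_99:,.0f}")  # roughly $47,000
```

The convenience is obvious: one dollar figure per desk, comparable across the bank. The trap is equally obvious once stated: two desks with identical standard deviations, and hence identical VaR, can face wildly different dangers beyond the 99th percentile.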

The best of the mathematical economists and hedge fund designers are undoubtedly individuals of substantial intellectual talent. But the general use of mathematics in risk management is not a guarantee of particularly reliable conclusions. It is not even an invariable corrective to more intuitive ways of interpreting information. In many economic applications, mathematics is being used by both academics and businesspeople in ways that suggest they have given little thought to the general assumptions they are using, or to just how far mathematical models of any kind can map the world of experience.

Few modern physicists worry much about past mistakes and blind alleys, often the product of apparently bulletproof mathematical exposition. So long as theories generate new questions and interesting new research possibilities, they mostly charge on, confident that their studies will keep providing better and better results, both in persuasiveness and practicality. But that is largely because they have spent four centuries tracing out the implications of the grand premise announced in the very title of Newton's greatest work, Philosophiae Naturalis Principia Mathematica ("The Mathematical Principles of Natural Philosophy"). Like the American Declaration of Independence of a century later, this title is not just an identification, but a manifesto in its own right. Aristotelian physics, omnivorously concerned with empirical experience of all kinds, substantially advanced by mediaeval monks and not totally dead in Newton's own time, was qualitative, logical and verbal, making no link between celestial astronomy and the homely gravitational urges of earth and water. Newton was not just writing a brilliant monograph advancing an existing discipline. In completing the work of predecessors like Galileo and Kepler, he was defining physics as mathematical.

His arguments had overwhelming force and transformed the world because he could show the moon, the tides, distant rotating stars, and the apple falling from the tree all behaving according to a simple mathematical law, so that a measurement of the apple's fall could be used to predict the path of the moon across the sky and vice versa. In the following three centuries, similar empirically testable theories were developed to explain heat, light, electricity, and the somewhat more baffling sub-atomic particles. No comparably all-embracing and satisfactory mapping of mathematics on empirical reality has ever been carried out in any of the other well-defined natural sciences, even if they all now use a great deal of mathematics. That is scarcely surprising: so far as something in the world of experience can be completely described by mathematics alone, it is part of physics.
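The apple-to-moon argument is a calculation anyone can repeat. With modern values for the constants (assumed here; Newton's own figures were rougher), the inverse-square law predicts the moon's orbital acceleration from surface gravity to within about one percent:

```python
from math import pi

g = 9.81            # surface gravity (the apple's fall), m/s^2
R_earth = 6.371e6   # earth's mean radius, m
r_moon = 3.844e8    # mean earth-moon distance, m
T = 27.32 * 86400   # sidereal month in seconds

# Observed: centripetal acceleration needed to hold the moon on its orbit.
a_observed = 4 * pi ** 2 * r_moon / T ** 2

# Predicted: surface gravity diluted by the inverse square of distance.
a_predicted = g * (R_earth / r_moon) ** 2

print(f"observed:  {a_observed:.5f} m/s^2")
print(f"predicted: {a_predicted:.5f} m/s^2")
```

Both figures come out near 0.0027 m/s², about 1/3600 of the apple's acceleration, just as an inverse-square law demands for a body sixty earth-radii away.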

Mathematics is otherwise a mental creation, capable of generating countless interesting patterns (n-dimensional spaces, for example) that may not exist anywhere in the physical universe. The mental origins of this vast range of possibilities have not always been grasped by natural scientists, physicists included, and they most certainly are not understood by the immense numbers of human beings who make daily use of mathematics, more and more with the aid of calculators and computers. There is no necessary connection between familiarity and understanding. Young schoolchildren of moderate mathematical ability, even without computers, can today readily solve division problems that would have stumped Leonardo da Vinci, since the Renaissance did not yet have simple and convenient ways of writing fractions or decimals. But that doesn't mean the children are actually "better mathematicians" than da Vinci, only that they can now be trained in the use of better tools.

In modern physics, for a full century now, the empirically observed behaviour of particles has been revealed to be very much at odds with ordinary human notions of space, time, motion, and their conformity to canons of reasoned thought. Mathematics had been an indispensable tool for physicists ever since the age of Newton, but from the era of Planck, Rutherford, and Einstein, it became a great deal more than that. For many developments in modern physics, mathematics is the only language in which any kind of explanation is even possible. But even the physicists have had some profound discomforts and disagreements about just how to plug the strange entities they study into equations. Some of the most capable experimental physicists of the century, with Ernest Rutherford as the most notable example, tended to regard the appearance of very large amounts of mathematical exposition as dangerously close to empty or speculative verbalism.

The theoretical physicists, especially those knowing something of the foundations of mathematics, were uncomfortably aware that the experimentalists had good reasons for their doubts. They understood that even the tiniest and simplest particle is something very different from a geometric point given an a priori definition by the human mind.

The theories of quantum mechanics that began to develop in the 1920s and 1930s also required physicists to abandon the mathematical exactitude of classical mechanics for statistics, much to the disgust of Einstein, who held on doggedly to a more determinist concept of nature. He did not win over many of his fellow physicists, especially in later generations, but that was because they found statistical interpretation unavoidable, not because it made them very happy.

Ernest Rutherford once gave a classic demonstration of the fallacious conclusions that could follow from the seductive Platonism of mathematics applied to physics. To the very end of the 19th century, some distinguished natural scientists were still resisting both the evolutionary biology of Darwin and the geological theories of Charles Lyell. This resistance was not entirely caused by attachment to Biblical literalism. Victorian physicists were confident they had quite exact information about the sun: its mass, its composition, its distance from the earth, its surface temperature, and the amount of radiant energy with which it was warming the earth. They were also by then equipped with the physical theory of thermodynamics. Since the sun could be regarded as some kind of gigantic furnace, it appeared possible to calculate just how long the furnace had been burning. The famed Lord Kelvin made just such a calculation, and concluded that the sun could not be much more than a few tens of millions of years old. On the much longer time spans required by the theories of Darwin and Lyell, the sun would long ago have burned itself to a cinder. Kelvin and his admirers found his calculations more persuasive than anything that Darwin and Lyell could claim about what this corner of the universe had been like billions of years ago.
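The Victorian reasoning can be reconstructed as back-of-envelope arithmetic, using modern values for the sun's mass and output (an assumption; Kelvin's own data were cruder). A chemically burning sun lasts only millennia; Kelvin's preferred mechanism, gravitational contraction, stretches the budget to tens of millions of years; only nuclear fusion reaches billions:

```python
L = 3.8e26      # sun's luminosity, W
M = 2.0e30      # sun's mass, kg
R = 7.0e8       # sun's radius, m
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
YEAR = 3.156e7  # seconds per year

def lifetime_years(energy_joules):
    # How long the given energy reserve could sustain the sun's present output.
    return energy_joules / L / YEAR

# 1) Chemical furnace: coal-like fuel at roughly 3e7 J/kg.
t_chemical = lifetime_years(M * 3e7)

# 2) Kelvin's route: gravitational contraction, E ~ G M^2 / R.
t_gravity = lifetime_years(G * M ** 2 / R)

# 3) Fusion: ~10% of the hydrogen burned, ~0.7% of its mass becoming energy.
t_fusion = lifetime_years(0.1 * M * 0.007 * (3e8) ** 2)

print(f"chemical:    {t_chemical:,.0f} years")  # thousands of years
print(f"contraction: {t_gravity:.2e} years")    # tens of millions
print(f"fusion:      {t_fusion:.2e} years")     # billions
```

Each step of the arithmetic is impeccable; what was wrong was the premise about where the energy came from, which is precisely the moral the article draws.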

Shortly after Rutherford had won his own Nobel (in chemistry, somewhat to his amusement) for his work with Frederick Soddy at McGill on atomic disintegration, he gave a lecture in England in which he advanced the idea that the sun was indeed a furnace, but a thermonuclear one of constant combination and recombination of hydrogen and helium nuclei, which could run for billions of years. He also noticed that Lord Kelvin had slipped into the lecture hall, and was careful to work in a gracious tribute to Kelvin's calculation as a fine piece of work in terms of the data then available, which not only got a pleased glow out of Kelvin, but probably did no little to smooth the way for the acceptance of the newer interpretation.

Rutherford retained scepticism about a great deal of mathematical and theoretical physics throughout his whole lifetime, even including some of the work of Einstein. While his work is still greatly admired, most contemporary physicists would probably agree that he went too far in the direction of mathematical agnosticism, but the science of the present could still profit from a good sprinkling of researchers of Rutherfordian temperament in all fields. To be sure, the riot of computer models now found in almost every science includes several that are not only intellectually fascinating and aesthetically pleasing, but compellingly persuasive as new contributions to understanding of the real world. For example, mathematical biologists have been able to use multiple iterations of very simple formulas to show how animals could develop spotted bodies and striped tails, or to construct three-dimensional representations of such surprising things as clusters of lilacs.

For that matter, computer modelling is breaking down walls between different disciplines in general; some of the "rocket scientists" of the stock market have been actual rocket scientists, and bright young polymaths at the Santa Fe Institute ramble joyfully from cosmology to methods of accounting. But even such thoroughgoing experimentalists as Rutherford did not deny that mathematical patterns may be the best devices available to make sense of the external world. What worried him was that it is too easy to find patterns that apply only in very limited regions, and indeed that a lot of good science consists in getting these regional limits right, not in carrying out the mathematical calculations themselves.

RM does not necessarily work better as it absorbs more mathematics, nor are its most ambitious applications being trimmed by Darwinian market competition, any more than they were for Marxism or Freudianism, and for much the same reason. Like those once towering structures of interpretation, RM is capable of generating an unlimited body of refinements, improvements, novel applications, solemn internal debates, and excommunications of heretics, all with a blissful indifference to repeated falsifications from the world of experience.

Unlike them, it provides the additional psychological compulsions of seductive charm for mathematical initiates and intimidating bafflement for the laity, conveniently packaged for both in computer programs. Academics, consultants, executives, and bureaucrats ought to be persuaded to relabel it LFG (Latest Fancy Guesswork) and constantly remind themselves that submission to systematic interpretations of reality has been, and always will be, the riskiest business of all.

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.
