Finance is, somewhat to my chagrin, not a science. This is partly a reflection of the complexity and chaotic nature of the system: a thorough, bottom-up understanding of how everything works would have to include psychology, geography, and technology, to name but a few of the many, many factors that ultimately drive the markets. Moreover, even if that were possible, any successful model of the markets would soon have to include itself in its own predictions, and thus either admit simplification without significant change, or else be more complex than itself[1]. Neither seems particularly feasible.
That is the ideal; however, just because the ideal is out of reach doesn't mean that models can't be useful. In practice, the approach is statistical: we model summarised information (such as interest rates) as random variables. Done properly, this can provide sensible proxies, and is crucial in allowing for effective risk management. It does, however, highlight an important philosophical point: whenever we model risks, we do so not from a first-principles understanding of the underlying physics, but from stochastic approximations. The further into the future we extrapolate, the less reliable our answers will be. We should therefore keep our models as simple, as falsifiable, and as neutral as possible.
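To make the statistical approach concrete, here is a minimal sketch of treating an annual return as a random variable rather than deriving it from first principles. The mean and volatility figures are illustrative assumptions, not calibrated to any market:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions only (not calibrated to any market):
mu, sigma = 0.05, 0.15       # assumed mean and volatility of annual log-returns
n_sims = 10_000

# Model the year's return as a random variable; summary statistics
# of the simulated outcomes then act as proxies for the "true" risk.
log_returns = rng.normal(mu, sigma, size=n_sims)
growth = np.exp(log_returns)

print(f"mean 1-year growth factor: {growth.mean():.3f}")
print(f"5th percentile:            {np.percentile(growth, 5):.3f}")
```

The 5th percentile here is exactly the sort of proxy a short-horizon risk model produces: not a physical prediction, but a stochastic approximation whose reliability decays as the horizon lengthens.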
Next, let's take a look at time horizons. When using a risk model with a relatively short time horizon – say 1 year – we have a reasonably substantial body of data on which to base it, and against which to test it. For longer time horizons, however, we effectively have less data: 100 years of data gives 100 disjoint 1-year data points, but only 5 disjoint 20-year data points. As such, we would generally be more confident that the dynamics of the shorter-term model were a reasonable fit, and more confident in its input values. Because of compounding, we would also expect long-term models to be more sensitive to their inputs[2], and we would expect them to be more complicated. Using a model with a longer time horizon therefore means sacrificing some degree of simplicity and falsifiability; so why do people do it?
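Both effects above – the shrinking data and the compounding sensitivity – are easy to verify with a few lines of arithmetic. The 1% return shift below is an illustrative assumption:

```python
# Shrinking data: 100 years of history gives 100 disjoint 1-year
# samples, but only 5 disjoint 20-year samples.
years_of_data = 100
for horizon in (1, 20):
    print(f"{horizon:>2}-year horizon: {years_of_data // horizon} disjoint samples")

# Compounding sensitivity: a 1% change in the assumed annual return
# barely moves a 1-year projection, but shifts a 20-year projection
# by roughly 20%.
for r in (0.04, 0.05):
    print(f"r = {r:.0%}: 1-year growth {1 + r:.3f}, 20-year growth {(1 + r) ** 20:.3f}")
```

At one year the two assumptions differ by 1%; after twenty years of compounding, the projected growth factors differ by over 20% (about 2.19 versus 2.65), which is why long-horizon outputs lean so heavily on their inputs.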
Ultimately, there are three legitimate reasons:
1) To account for significant external factors
2) The outputs of simpler models do not answer the questions that interest you
3) The behaviour of variables is fundamentally different for longer-term horizons
In the first case, a short-term model would not be appropriate, as it simply would not capture the relevant risks. For example, in a defined contribution (DC) scheme, we would expect the pension of a young member with limited savings to be more affected by salary growth and the size of future contributions than by investment returns in the early years, and we would have to factor those in.
The second case involves a trade-off between applicability and reliability; again, though, a long-term model may be the best solution. Using the same example, we couldn't simply use the 1-year risk profile of the member's assets as a measure of the risks they face.
The third option is arguably the most interesting; however, you would have to be sure that the behaviour really was different, and this is very difficult to establish in finance (not least because of the "shrinking data" effect referred to earlier).
As an example, the graph below shows some sample paths of pot values from a DC scheme under Redington’s risk model. The continual contributions result in a heavily skewed distribution, and the risk profile is not adequately captured by a short-term risk model.
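To illustrate the shape of that distribution (though not Redington's actual model, whose details are not given here), the following is a hedged sketch of a DC pot under annual contributions and lognormal returns. The contribution amount, horizon, and return parameters are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions only -- this is NOT Redington's model:
annual_contribution = 5_000    # assumed contribution, paid at the start of each year
mu, sigma = 0.04, 0.15         # assumed annual log-return parameters
years, n_sims = 30, 10_000

# Simulate each path: contribute, then apply a random year of growth.
pots = np.zeros(n_sims)
for _ in range(years):
    pots += annual_contribution
    pots *= np.exp(rng.normal(mu, sigma, size=n_sims))

# Continual contributions plus compounded random returns produce a
# heavily right-skewed distribution: the mean sits well above the median.
print(f"median pot:         {np.median(pots):,.0f}")
print(f"mean pot:           {pots.mean():,.0f}")
print(f"mean/median ratio:  {pots.mean() / np.median(pots):.2f}")
```

Even in this toy version, the mean/median ratio comes out well above 1: a minority of very strong paths drag the average up, which is precisely the kind of asymmetry a 1-year risk snapshot cannot show.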
In my next blog, I will look at the potential pitfalls of long-term risk models, and how to compare the approaches.
[1] Strangely, this sort of thing might be possible with quantum mechanics; I for one would still be sceptical of using it to invest my pension, though.
[2] In particular, high expected-return assumptions can mask the risk of significant drawdowns, and distributional assumptions will have far more impact.