MODEL REVIEW AND VALIDATION

In 2012, the “London Whale” trades resulted in losses of $6 billion. JPMorgan was widely regarded as one of the best managers of banking risk in the world, and had navigated the financial crisis far better than most; yet these losses resulted from inappropriate use of a risk model that seems not to have been fit for purpose.

There was nothing new in this. In 2007, David Viniar, then CFO of Goldman Sachs, famously described seeing “25 standard deviation events, several days in a row”. To put this in perspective, for a normal distribution this is equivalent to buying one ticket a day and winning the lottery every day for three weeks. In fact, a 25 standard deviation event has a probability of around 3×10⁻¹³⁸, whilst the number of particles in the universe is estimated at around 10⁸⁰. The age of the universe, in days, is a paltry 10¹³.[1]
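As a sanity check on those numbers, here is a short back-of-the-envelope calculation in Python (a sketch; the 6-from-49 lottery is an illustrative assumption, not a reference to any particular draw):

```python
from math import comb, erfc, log, sqrt

# One-sided tail probability of a 25 standard deviation move under a
# normal distribution: P(Z >= 25) = erfc(25 / sqrt(2)) / 2
p_25sigma = erfc(25 / sqrt(2)) / 2
print(f"P(Z >= 25) = {p_25sigma:.2e}")   # about 3.1e-138

# Jackpot odds for a single ticket in a 6-from-49 lottery
p_jackpot = 1 / comb(49, 6)              # about 7.2e-08

# Number of independent daily wins with the same joint probability
days = log(p_25sigma) / log(p_jackpot)
print(f"Equivalent to winning every day for about {days:.0f} days")  # ~19
```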

Now, Viniar was cutting his bank’s holdings of US residential mortgages back in 2006, so we can assume he wasn’t fooled by the outputs of a faulty risk model. But his comments highlight how starkly unfit for purpose some models were, and many of Goldman’s rivals fared far worse in the financial crisis.

It is not hard to see why, though: the world is complicated. Constructing good models is fundamentally difficult, and so is testing them rigorously. It is imperative to have a good process in place for validating any risk model; ideally, and certainly for important models, validation should be carried out by an external third party, who can offer both visible objectivity and a fresh perspective on the underlying assumptions and methodologies.

Moreover, the whole regulatory landscape is changing, with the introduction of new legislation such as Basel III, Solvency II and EMIR, and model risk management is a hot topic. In fact, the Federal Reserve has taken the issue sufficiently seriously that it has already issued supervisory guidance on model risk management and model validation (SR 11-7). This guidance is both detailed and wide-ranging, identifying principles that apply to any user of an ALM model.

The crucial question for a model validation is ultimately this: “do you know when and how you would expect this model not to work?” To answer this question, the guidance drills down further, and requires that any model validation address three key areas:
 
1. Conceptual soundness – does it make sense? This includes a thorough review of the assumptions and methodologies employed, as well as of the data set used; it also involves identifying where the key assumptions are usable and where they are not.
 
2. Outcomes analysis – do the outputs capture the behaviour seen in the real world? In principle this is simpler than the conceptual review; in practice it may be harder. However, it is clearly necessary, and it can reject a number of poor models with a clear rationale (see the backtesting sketch after this list).
 
3. Ongoing monitoring – does it still make sense? (The guidance is to re-evaluate the model at least once a year.) As the market changes, or as new assets and liabilities come into the portfolio, the model may no longer be fit for purpose. For example, JPMorgan’s models were appropriate for small credit positions, but they ceased to be viable once the bank held so much of the CDS market. Markets change over time, so models should change as well.
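To make outcomes analysis concrete, here is a minimal sketch of one standard backtesting tool, the Kupiec proportion-of-failures test for a daily VaR model (my illustration; the guidance does not prescribe this test by name, and the numbers below are hypothetical). It asks whether the number of days on which realised losses exceeded the model’s VaR is statistically consistent with the coverage the model claims:

```python
import math

def kupiec_lr(n_days: int, n_breaches: int, coverage: float) -> float:
    """Kupiec proportion-of-failures likelihood-ratio statistic.

    n_days     -- number of days in the backtest window
    n_breaches -- days on which the realised loss exceeded the model's VaR
                  (assumed to satisfy 0 < n_breaches < n_days)
    coverage   -- the exceedance probability the model claims (e.g. 0.01)

    Under the null hypothesis that the claimed coverage is correct, the
    statistic is approximately chi-squared with one degree of freedom,
    so a value above 3.84 rejects the model at the 5% level.
    """
    p, x, n = coverage, n_breaches, n_days
    phat = x / n  # observed breach frequency
    loglik_null = (n - x) * math.log(1 - p) + x * math.log(p)
    loglik_alt = (n - x) * math.log(1 - phat) + x * math.log(phat)
    return 2 * (loglik_alt - loglik_null)

# Hypothetical example: a 99% one-day VaR model breached on 9 days
# out of 250, where about 2.5 breaches were expected.
lr = kupiec_lr(n_days=250, n_breaches=9, coverage=0.01)
print(f"LR = {lr:.2f} -> {'reject' if lr > 3.84 else 'cannot reject'}")
```

A model rejected here fails with a clear rationale: it breached far more often than it said it would. Run over a rolling window, the same test also doubles as a simple form of ongoing monitoring.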
 

It is important to note that model review and validation is not a purely negative process; it can have a lot of upside potential as well. For example, consider a bank’s economic capital model for its pension scheme: a thorough review could lead not just to better risk monitoring but also to investment strategies that are more efficient with respect to Tier 2 capital. And model review is clearly a vital part of any sensible model risk management policy: a clear, simple set of recommendations on when and how to use the model, and when to stop using it, should make it far easier to avoid running sizeable hidden risks. The argument in favour is very clear.
 
So the question I put to you is this: “how has your model been validated?” Or, better yet, “do you know when and how you would expect your model not to work?”
 
 

[1] This is somewhat harsh: since information arrives discontinuously, the normal distribution cannot apply over short horizons, but that does not mean it cannot apply over longer periods. The point is that the model is only useful if the person using it understands this.
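The footnote’s point can be illustrated with a toy simulation (purely illustrative: the Student-t daily returns are an assumption, not a model of any market). Fat-tailed daily returns, once aggregated over 21-day periods, look much closer to normal:

```python
import random
import statistics as stats

random.seed(42)  # reproducible

def student_t(df: int) -> float:
    """Draw from a Student-t: a normal divided by an independent
    chi-squared scale, the classical construction."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / (chi2 / df) ** 0.5

def excess_kurtosis(xs: list) -> float:
    m = stats.fmean(xs)
    s2 = stats.fmean([(x - m) ** 2 for x in xs])
    m4 = stats.fmean([(x - m) ** 4 for x in xs])
    return m4 / s2 ** 2 - 3  # zero for a normal distribution

# Fat-tailed "daily returns": t with 6 d.o.f. (theoretical excess kurtosis 3)
daily = [student_t(6) for _ in range(252 * 200)]
# Aggregate into non-overlapping 21-day "monthly returns"
monthly = [sum(daily[i:i + 21]) for i in range(0, len(daily), 21)]

print(f"daily excess kurtosis:   {excess_kurtosis(daily):.2f}")   # clearly positive
print(f"monthly excess kurtosis: {excess_kurtosis(monthly):.2f}") # close to zero
```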

 

Please note that all opinions expressed in this blog are the author’s own and do not constitute investment advice.

 

Author: Alex White

Alex joined Redington in 2011 as part of the ALM team. He is Head of ALM research, which involves proactively modelling new asset classes and strategies, building and testing new models for new business lines, and continuously reviewing the models and assumptions currently in use. In addition, he designs technical solutions for clients who require a bespoke offering to better solve the problems they face. Alex is a Fellow of the Institute and Faculty of Actuaries and holds an MA (Hons) in Mathematics from Robinson College, Cambridge.