Catching Up With Volatility


The last few years have seen significant volatility in the financial markets. This has highlighted a basic issue with the simulation models popular in financial institutions: the models have a hard time catching up with volatility.

In other words, these models react slowly to changing volatility conditions, leaving the risk metrics out of sync with the real world and potentially failing to predict significant losses, with both business and regulatory repercussions.

It is not that the simulation methodologies are necessarily wrong.  Instead, it’s that they have limitations and work best under “normal” conditions.

Take, for example, historical simulation. Most banks use historical simulation to calculate VaR for regulatory purposes. The calculation includes a considerable amount of historical data to ensure a high confidence level for the predicted values. Historical simulation, however, rests on the assumption that there will be nothing new under the sun, an assumption that has repeatedly proved wrong since 2008. Financial markets have, unfortunately, set new records in volatility and free fall over the past three years, at least relative to the historical window typically included in such calculations.
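To make the mechanics concrete, here is a minimal sketch of a historical-simulation VaR calculation. The window length, confidence level, and the randomly generated P&L are illustrative assumptions; in practice the scenarios come from repricing the portfolio under each historical one-day change.

```python
import numpy as np

def historical_var(pnl_scenarios, confidence=0.99):
    """One-day VaR as the loss quantile of historical P&L scenarios."""
    # VaR is the loss exceeded with probability (1 - confidence).
    return -np.percentile(pnl_scenarios, 100 * (1 - confidence))

# Stand-in for 500 days of portfolio P&L under historical factor changes
# (purely illustrative data).
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0e6, size=500)
print(f"99% 1-day VaR: {historical_var(pnl):,.0f}")
```

Note that the tail estimate can only be as extreme as the worst days in the window, which is exactly the limitation discussed above.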

Another popular methodology is covariance-based Monte Carlo simulation, where the covariance matrix is estimated from recent historical data. This is, again, limited by the events captured in the historical window, and it can also dampen the effects of extreme events. The covariance matrix can furthermore suffer from a Simpson's-paradox effect: if correlations between risk factors reverse during tumultuous times, a covariance matrix estimated over a window spanning those times can show little or no correlation at all.
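A minimal sketch of this approach might look as follows, assuming a linear P&L approximation and normally distributed factor returns (both simplifying assumptions). Whatever regime mix sits in the historical window is baked directly into the estimated covariance matrix.

```python
import numpy as np

def mc_var(returns_window, positions, confidence=0.99, n_sims=100_000, seed=0):
    """Monte Carlo VaR: draw correlated factor returns from a covariance
    matrix estimated over the given historical window."""
    cov = np.cov(returns_window, rowvar=False)   # regime mix is baked in here
    rng = np.random.default_rng(seed)
    sims = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=n_sims)
    pnl = sims @ positions                       # linear P&L approximation
    return -np.percentile(pnl, 100 * (1 - confidence))
```

If the window mixes a calm regime with a stressed one whose correlations have flipped sign, the off-diagonal terms of `cov` can average out toward zero, which is the Simpson's-paradox effect described above.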

But there is help to be had:

If the issue is that the historical window does not include enough representative events, the historical changes can be augmented with specific events or hypothetical scenarios. This might, however, require a more free-style method of simulation.
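In its simplest form, the augmentation can be as plain as appending hand-crafted scenario rows to the set of historical factor changes before revaluation. The file name, factor ordering, and stress values below are purely hypothetical.

```python
import numpy as np

# Historical one-day factor changes, one row per day (days x factors).
# "factor_changes.csv" is a hypothetical input file.
historical_changes = np.loadtxt("factor_changes.csv", delimiter=",")

# Hand-crafted stress scenarios; values and factor ordering are illustrative
# (e.g. equities -20%, rates -150bp, implied vol +30 points).
stress_scenarios = np.array([
    [-0.20, -0.015, 0.30],
    [-0.35, -0.020, 0.50],
])

# Augmented scenario set; downstream revaluation treats all rows alike.
scenario_set = np.vstack([historical_changes, stress_scenarios])
```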

If issues arise because too much “irrelevant” history is included, drowning out the important events, a shorter or more selective set of data can be used.

Choosing a shorter window can reduce the confidence one can place in the results, since fewer scenarios inform the tail. However, if possible, switching to covariance-based Monte Carlo simulation can alleviate this effect without requiring more data.

If extreme events either dominate or drown in the covariance matrix, a solution might be to keep multiple covariance matrices at hand and choose among them based on signal values in the data. This can also remedy issues with correlation reversal, and again it should not require any new data.
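One possible shape of such a switch, assuming pre-computed "calm" and "stressed" matrices and using recent realized volatility as the signal (the signal choice, threshold, and two-regime setup are all illustrative):

```python
import numpy as np

def select_covariance(recent_returns, calm_cov, stressed_cov, vol_threshold):
    """Pick one of several pre-computed covariance matrices based on a
    simple signal: realized volatility over the recent window."""
    realized_vol = np.std(recent_returns, axis=0).mean()
    return stressed_cov if realized_vol > vol_threshold else calm_cov
```

The chosen matrix then feeds the Monte Carlo simulation unchanged, so the existing data and infrastructure can be reused.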

A more costly, but also more accurate, approach is to formulate statistical models for the risk factors. This allows volatility, and how quickly it should feed into the risk metrics, to be modeled explicitly.
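As one simple example of explicit volatility modeling, an exponentially weighted moving average (RiskMetrics-style) lets a single decay parameter control how fast new volatility enters the estimate; richer choices such as GARCH-type models follow the same idea. The decay value below is the classic daily RiskMetrics setting, used here purely for illustration.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted volatility estimate.

    A smaller lambda makes the estimate react faster to new volatility;
    lam = 0.94 is the classic daily RiskMetrics choice.
    """
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return np.sqrt(var)
```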

Finally, if the resources are available, the best approach is to choose the methodology that is most appropriate for each individual risk factor.