Machine learning has permeated almost all areas in which inferences are drawn from data. The range of applications in the financial industry spans from credit rating and loan approval processes to automated trading, fraud prevention and anti-money laundering. Machine learning has demonstrated significant uplift in these business areas and its use will continue to be explored in the financial industry.

Nevertheless, there is one area in which machine learning has not (yet) contributed many innovations: time series analysis for financial risk measurement. The main reasons are rooted in the observation that financial time series are very noisy, non-stationary and often very short. Traditional machine learning algorithms (e.g., long short-term memory networks) therefore simply do not find enough data to draw relevant conclusions.

The first issue relates to the low signal-to-noise ratio usually encountered in financial markets data. This aspect is closely related to the danger of overfitting: due to the large noise component, the algorithm may latch on to irrelevant noise patterns instead of the real signal. The second issue relates to the lack of stationarity: financial time series frequently change their local volatility, and the algorithm always lags behind. A clear indication of overfitting is good performance on the training data combined with deteriorating performance on new data. What can be done about this?


In financial risk measurement there seems to be a quite handy solution, which has been part of engineering toolboxes for more than half a century. The Kálmán Filter provides a clever way of separating the signal from the noise by adaptively averaging over successive observations. Due to its “online” character, the Kálmán Filter is very fast and flexible. Applied to financial time series, the signal is the local volatility and the noise is simply the component that remains after conditioning on the local volatility. The clever adaptation of this filter to a changing environment could be thought of as an “ancient AI”; a minimal sketch of the idea is given below. What about more recent developments that improve on Kálmán Filter techniques?
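To make this concrete, here is a minimal sketch in Python (NumPy only) of a local-level Kálmán Filter applied to log squared returns as a crude proxy for log volatility. The model choice, the parameter values `q` and `r`, and the function name are illustrative assumptions, not a prescription.

```python
import numpy as np

def kalman_local_volatility(returns, q=0.01, r=1.0):
    """Local-level Kalman filter on log squared returns.

    State:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (latent log variance)
    Observation: y_t = x_t + v_t,      v_t ~ N(0, r)   (y_t = log r_t^2)
    Returns the filtered local volatility estimate at each time step.
    """
    # Crude proxy for log variance; the known bias of E[log eps^2] is ignored here
    y = np.log(returns**2 + 1e-12)
    x_est, p_est = y[0], 1.0               # initial state mean and variance
    vol = np.empty(len(y))
    for t, y_t in enumerate(y):
        # Predict: random-walk state, so the mean carries over and the variance grows
        x_pred, p_pred = x_est, p_est + q
        # Update: the Kalman gain balances the prediction against the new observation
        k = p_pred / (p_pred + r)
        x_est = x_pred + k * (y_t - x_pred)
        p_est = (1.0 - k) * p_pred
        vol[t] = np.exp(0.5 * x_est)       # back-transform to the volatility scale
    return vol

# Usage: simulated returns with a volatility jump halfway through
rng = np.random.default_rng(0)
sigma = np.where(np.arange(1000) < 500, 0.01, 0.03)
returns = rng.normal(0.0, sigma)
print(kalman_local_volatility(returns)[[0, 499, 999]])
```

The gain `k` is what makes the filter adaptive: when the prediction uncertainty is large relative to the observation noise, new data points move the estimate quickly, otherwise the filter smooths them away.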

If we are willing to consider more general but also more complicated approaches, the so-called Particle Filters provide good service for problems in financial time series analysis. Even though these algorithms are more computationally demanding and only a few open-source libraries are currently available, Particle Filters are promising candidates for future benchmark tools; a minimal sketch is given below. But haven’t we forgotten something?
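As with the Kálmán Filter above, here is a minimal sketch of a bootstrap Particle Filter for a standard stochastic volatility model, again in plain NumPy. The model, the parameters `mu`, `phi`, `sigma_eta` and the number of particles are assumptions chosen for illustration only.

```python
import numpy as np

def particle_filter_volatility(returns, n_particles=2000,
                               mu=-9.0, phi=0.97, sigma_eta=0.15, seed=0):
    """Bootstrap particle filter for the stochastic volatility model

        h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t,  eta_t ~ N(0, 1)
        r_t = exp(h_t / 2) * eps_t,                           eps_t ~ N(0, 1)

    Returns the filtered volatility estimate exp(h_t / 2) at each time step.
    """
    rng = np.random.default_rng(seed)
    # Initialise particles from the stationary distribution of the log variance
    h = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2), n_particles)
    vol = np.empty(len(returns))
    for t, r_t in enumerate(returns):
        # Propagate particles through the state equation (prediction step)
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_particles)
        # Weight each particle by the likelihood of the observed return
        var = np.exp(h)
        log_w = -0.5 * (np.log(2 * np.pi * var) + r_t**2 / var)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Filtered volatility is the weighted average over particles
        vol[t] = np.sum(w * np.exp(0.5 * h))
        # Resample to avoid weight degeneracy (multinomial resampling)
        idx = rng.choice(n_particles, size=n_particles, p=w)
        h = h[idx]
    return vol

# Usage: same style of simulated returns as above
rng = np.random.default_rng(1)
sigma = np.where(np.arange(1000) < 500, 0.01, 0.03)
returns = rng.normal(0.0, sigma)
print(particle_filter_volatility(returns)[[0, 499, 999]])
```

The extra cost relative to the Kálmán Filter comes from carrying a whole cloud of particles instead of a single mean and variance, which is exactly what buys the flexibility to handle non-linear and non-Gaussian dynamics.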

Of course, risk measurement in financial institutions always carries a regulatory dimension. Therefore, machine learning / AI innovations used for Basel Pillar I or Pillar II purposes need to be explainable and interpretable. We simply cannot use a deep neural network approach that comes as a black box, even if it shows superb backtesting performance. Since the Kálmán Filter and Particle Filters are built on explicit state space models, interpretability seems straightforward to tackle.

To deliver a proof of concept, a good strategy is to implement explainable AI techniques such as the above-mentioned approaches as “challenger models” within a risk model validation framework. If these methods prove themselves there, there is a strong case for promoting them to “champion models”.