In computational science, deep learning is arguably one of the most heralded techniques of recent years, owing to its versatility and its impressive achievements alike. Indeed, applications of deep learning range from beating the (human) world champion at the highly complex game of Go to the promise of deploying self-driving cars, at scale and all over the world, in the near future.
Deep learning (DL) belongs to the field of artificial intelligence and excels at extracting and mastering the often highly non-linear patterns of a given process, whatever that process might be. Its main requirements are the availability of a large amount of data describing the behavior of the process under different conditions, and a substantial amount of computational power. However, since the cost of data storage and the effort of collecting data have dropped dramatically over recent years, and since Moore's law on the growth of computational power shows, even today, few signs of slowing down, fitting deep learning models that produce extremely useful predictions has been a reality for some years now.
In other words, the time is ripe to deploy this amazing technology in the insurance industry as well! However, the methodological framework underlying deep learning differs somewhat from the statistical one we have all grown accustomed to (mainly through our general fondness for GLMs), and the computational horsepower needed merely to fit these models is an order of magnitude greater than what the classical statistical models require.
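To make the contrast concrete, the following is a minimal sketch, using simulated claim-frequency data (the data, variable names, and model settings are illustrative assumptions, not taken from the text): a classical Poisson GLM yields a handful of interpretable coefficients, while even a small neural network carries far more parameters and is fit by iterative gradient-based optimization.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.neural_network import MLPRegressor

# Hypothetical simulated data: 1000 policies with three rating factors
# and Poisson-distributed claim counts following a log-linear mean.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3))
mu = np.exp(0.5 + X @ np.array([0.8, -0.4, 0.2]))
y = rng.poisson(mu)

# Classical GLM: three coefficients, each directly interpretable
# as a multiplicative effect on the expected claim frequency.
glm = PoissonRegressor(alpha=0.0).fit(X, y)
print(glm.coef_.shape)        # (3,)

# Small neural network: two hidden layers of 16 units, fit iteratively;
# its first weight matrix alone already has 3 x 16 = 48 parameters.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X, y)
print(net.coefs_[0].shape)    # (3, 16)
```

The difference in parameter count and fitting procedure is exactly why deep learning demands both more data and more computation than a GLM for the same task.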