Explainability: The Next Frontier for Artificial Intelligence in Insurance and Banking
January 6, 2021
“Any sufficiently advanced technology is indistinguishable from magic,” argued the science fiction writer Arthur C. Clarke. Indeed, advanced technology, such as a new machine learning algorithm, can sometimes resemble magic. Evolving applications of machine learning, from image classification and voice recognition to uses in the insurance and banking industries, have seemingly otherworldly properties.
Many companies are wary of changing their traditional analytical models, and rightly so. Magic is dangerous, especially when it is not well understood. Neural networks and tree ensemble algorithms are “black boxes”: their inner structure can be extremely complex. At the same time, several studies [1] have shown that neural networks and tree-based algorithms can outperform even the most carefully tuned traditional insurance risk models constructed by experienced actuaries, because these newer algorithms automatically identify hidden structure in the data. The mystery and the usefulness of these algorithms are thus two sides of the same coin: there is an inherent trade-off between the accuracy of an analytical model and its level of “explainability.” How can we trust models if we cannot understand how they reach their conclusions? Should we simply give in to the magic, sacrificing trust in and control over something we cannot fully comprehend in exchange for accuracy?
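To make the trade-off concrete, here is a minimal sketch in Python. It fits a gradient-boosted tree ensemble, a typical “black box,” and then probes it with permutation importance, one common first step toward explainability. The synthetic dataset is an assumption standing in for real policyholder data, which the article does not provide; this is an illustration of the general technique, not the method of any particular insurer.

```python
# Sketch: fit a "black box" tree ensemble, then probe it with a
# model-agnostic explanation technique (permutation importance).
# The data below is synthetic; it is a placeholder for real
# insurance or banking risk features, which we do not have here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for policyholder features and a binary outcome
# (e.g., claim / no claim).
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": hundreds of shallow trees with no single
# human-readable formula, unlike a traditional GLM rating model.
model = GradientBoostingClassifier(n_estimators=300, max_depth=3,
                                   random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# Permutation importance measures how much the held-out score drops
# when each feature is shuffled, yielding a coarse global ranking of
# what the ensemble actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.4f}")
```

A ranking like this answers only the global question of which inputs matter; explaining why the model reached a conclusion for one specific customer requires local techniques, which is precisely where the explainability challenge described above begins.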