
AI & Machine Learning Explainability - Unlocking the Black Box

Luba Orlovsky

January 15, 2024

  • Analytics
  • AI

In the bustling world of insurance, a quiet revolution is unfolding. It's not led by charismatic CEOs or flamboyant industry disruptors. Instead, it's powered by something far more enigmatic: machine learning (ML). These complex algorithms are the new oracles of the insurance industry, predicting everything from who's likely to file a claim to what premium consumers should pay.

But there's a catch – much like Pythia, the Oracle of Delphi, or Warren Buffett, the “Oracle of Omaha,” these oracles are often inscrutable, their predictions shrouded in mystery. This is where the concept of machine learning explainability enters the discussion, turning the spotlight on the inner workings of these digital soothsayers.

The Enigma of the Algorithm

Imagine you're applying for car insurance. You fill out the forms, answer all the questions, but the quote you receive seems unusually high. You might wonder, "Why?"

This is where many hit the wall of algorithmic opacity. The calculations that spit out your proposed premium are buried deep within layers of data and code, a labyrinth few can navigate. But should we accept this digital decree without question? The push for ML explainability says “no.”

The “Why” Behind the AI

Explainability in ML is about peeling back the layers of these complex systems to reveal the “why” and “how” of their decisions. It's a bit like asking a chef to reveal the recipe for a secret sauce. In the insurance industry, this transparency is not just a matter of curiosity; it's a matter of trust and fairness.

For insurers, explainable machine learning can be the bridge between innovation and customer confidence. When customers understand how their data is used and why certain decisions are made, trust grows.

For instance, if a health insurance application is denied, a clear explanation can ensure the customer knows it's not arbitrary, but based on understandable factors such as business rules or regulatory strictures.

The Human Touch in a Digital World

Explainability also returns a human touch to an increasingly automated process. It allows insurance professionals to review and understand the machine's recommendations, ensuring that they align with ethical and legal standards. This human oversight is crucial, as it ensures that ML aids, rather than replaces, human judgment.

The Road Ahead

The journey towards full machine learning explainability in insurance is not without its challenges. There's a delicate balance to strike between simplicity and accuracy.

If explanations are too simple, they might not fully capture the decision-making process, leaving consumers still wondering what happened. If they are too complex, they become as impenetrable as the algorithms they aim to illuminate, failing to answer the basic questions and defeating the purpose of the exercise.

Regulators, too, are joining the conversation, recognizing that as ML takes on a larger role in critical industries such as insurance, transparency isn't just a “nice to have,” it's a must-have.

Initiatives like the EU's General Data Protection Regulation (GDPR) are already paving the way, giving individuals the right to understand how their data is handled by automated systems.

Different Approaches to Explainability

With the growing importance of explainability, and a desire to foster good customer experiences (CX), analytics professionals are devising ways to explain machine-driven decisions in terms that laypeople can understand.

Here are just a few that have gained popularity to date:

LIME (Local Interpretable Model-agnostic Explanations)

LIME is an approach that helps to explain the predictions of any classifier or regressor in a faithful way, by approximating it locally with an interpretable model.

For instance, if an insurance company uses a complex model to predict the risk of car accidents, LIME can create a simple explanation for an individual prediction by perturbing the input data and understanding how the predictions change.
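To make this concrete, here is a minimal sketch using the open-source lime Python package on a synthetic accident-risk classifier. The feature names, data, and gradient-boosting model are illustrative assumptions, not a real rating model; only the LIME calls reflect the library's actual API.

```python
# A minimal LIME sketch on a hypothetical accident-risk classifier.
# Feature names, data, and the risk rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["driver_age", "annual_mileage", "prior_claims", "vehicle_age"]

# Synthetic portfolio of 1,000 policyholders with a made-up risk rule.
X = np.column_stack([
    rng.integers(18, 80, 1000),         # driver_age
    rng.integers(2_000, 40_000, 1000),  # annual_mileage
    rng.integers(0, 4, 1000),           # prior_claims
    rng.integers(0, 20, 1000),          # vehicle_age
]).astype(float)
y = ((X[:, 0] < 25) | (X[:, 2] >= 2)).astype(int)  # 1 = high accident risk

model = GradientBoostingClassifier().fit(X, y)

# LIME perturbs one instance, queries the model on the perturbations,
# and fits a small weighted linear model as a local explanation.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["low risk", "high risk"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (condition, weight) pairs for this one prediction
```

The result is a short list of human-readable conditions with weights indicating how strongly each one pushed this specific prediction towards or away from “high risk.”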

SHAP (SHapley Additive exPlanations)

SHAP values are based on game theory and provide a way to measure the contribution of each feature to the prediction. In the context of insurance, SHAP can be used to explain the output of a model that predicts the likelihood of a policyholder filing a claim, for example, by showing the impact of each individual policyholder's characteristics on the model's output.
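As a rough illustration, the sketch below uses the open-source shap package with a synthetic claim-likelihood classifier; the feature names, data, and model are made up for the example.

```python
# A minimal SHAP sketch on a hypothetical claim-likelihood model.
# Feature names, data, and the claim rule are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["policyholder_age", "vehicle_value", "prior_claims"]

X = np.column_stack([
    rng.integers(18, 80, 1000),         # policyholder_age
    rng.integers(5_000, 80_000, 1000),  # vehicle_value
    rng.integers(0, 4, 1000),           # prior_claims
]).astype(float)
y = ((X[:, 2] >= 2) | (X[:, 0] < 23)).astype(int)  # 1 = files a claim

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # one row of contributions per policyholder

# Positive values push the prediction towards "files a claim",
# negative values push it away; units are the model's log-odds.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

For tree ensembles, each row of SHAP values sums, together with a base value, to the model's raw prediction for that policyholder, which is what makes the attribution additive and straightforward to present.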

Feature Importance

This is a technique used to identify which features are most influential in a model's predictions. For example, in a Random Forest model used to determine insurance premiums, feature importance can reveal whether age, driving history, or vehicle type is most significant in calculating the premium.
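As an illustration, the sketch below trains a Random Forest on synthetic premium data and reads off its impurity-based importances; the features and premium formula are invented for the example.

```python
# A minimal feature-importance sketch for a hypothetical premium model.
# The features and premium formula are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
feature_names = ["age", "years_driving", "prior_claims", "vehicle_type_code"]

X = np.column_stack([
    rng.integers(18, 80, 1000),  # age
    rng.integers(0, 60, 1000),   # years_driving (driving history)
    rng.integers(0, 4, 1000),    # prior_claims
    rng.integers(0, 5, 1000),    # vehicle_type_code
]).astype(float)
# Made-up premium: younger drivers and prior claims cost more.
premium = 400 + 8 * (80 - X[:, 0]) + 150 * X[:, 2] + rng.normal(0, 25, 1000)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, premium)

# Impurity-based importances sum to 1; larger values mean the feature
# drives more of the model's splits.
for name, importance in sorted(
    zip(feature_names, model.feature_importances_), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.2f}")
```

Impurity-based importances can overstate high-cardinality features, so permutation importance (sklearn.inspection.permutation_importance) is a common cross-check.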

Partial Dependence Plots (PDP)

PDPs show the average relationship between a feature and the predicted outcome, with the effects of all other features averaged out. In insurance, a PDP can help to visualize how different levels of coverage or policyholder age affect the price of an insurance policy.
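A minimal sketch with scikit-learn's PartialDependenceDisplay is shown below; the pricing model, features, and premium formula are illustrative assumptions.

```python
# A minimal partial dependence sketch for a hypothetical pricing model.
# Features, data, and the premium formula are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(3)
feature_names = ["policyholder_age", "coverage_level", "prior_claims"]

X = np.column_stack([
    rng.integers(18, 80, 1000),  # policyholder_age
    rng.integers(1, 4, 1000),    # coverage_level (1 = basic, 3 = full)
    rng.integers(0, 4, 1000),    # prior_claims
]).astype(float)
premium = 350 + 6 * (80 - X[:, 0]) + 120 * X[:, 1] + 90 * X[:, 2]

model = GradientBoostingRegressor().fit(X, premium)

# Average predicted premium as policyholder_age and coverage_level vary,
# with the effect of the remaining features averaged out.
display = PartialDependenceDisplay.from_estimator(
    model, X, features=[0, 1], feature_names=feature_names
)
display.figure_.savefig("premium_pdp.png")
```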

Counterfactual Explanations

These are explanations that tell a user how to obtain a different decision by altering certain inputs. In an insurance setting, if a customer is denied a claim, a counterfactual explanation could indicate what factors might need to change for the claim to be approved.
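There are dedicated libraries for this, but the idea can be sketched with a simple brute-force search: perturb the mutable inputs of a denied claim and keep the smallest change the model would approve. Everything below, including the claim-approval model, features, and decision rule, is a made-up example.

```python
# A minimal, brute-force counterfactual sketch on a hypothetical
# claim-approval classifier. The model, features, and decision rule are
# invented; dedicated libraries exist for real use.
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
feature_names = ["days_to_report", "claim_amount", "prior_claims"]

X = np.column_stack([
    rng.integers(0, 90, 1000),        # days_to_report
    rng.integers(200, 20_000, 1000),  # claim_amount
    rng.integers(0, 5, 1000),         # prior_claims
]).astype(float)
y = ((X[:, 0] < 30) & (X[:, 2] < 3)).astype(int)  # 1 = claim approved

model = RandomForestClassifier(random_state=0).fit(X, y)

denied = np.array([45.0, 3_000.0, 1.0])  # a claim the model rejects

# Search a small grid of candidate changes and keep the closest one
# (in scaled distance) that the model would approve.
scale = X.std(axis=0)
best, best_dist = None, np.inf
for candidate in product(np.arange(0, 91, 5),  # days_to_report
                         [denied[1]],          # keep claim_amount fixed
                         np.arange(0, 5)):     # prior_claims
    candidate = np.array(candidate, dtype=float)
    if model.predict([candidate])[0] == 1:
        dist = np.abs((candidate - denied) / scale).sum()
        if dist < best_dist:
            best, best_dist = candidate, dist

if best is None:
    print("No counterfactual found in the search grid.")
else:
    print("Denied claim:  ", dict(zip(feature_names, denied)))
    print("Counterfactual:", dict(zip(feature_names, best)))
```

The answer can then be phrased for the customer as “had the claim been reported within N days, it would have been approved,” which is far more actionable than a raw score.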

Global Surrogate Models

These are interpretable models that are trained to approximate the predictions of a “black box” model. For example, a decision tree could be used as a surrogate model to interpret a more complex ensemble model used for predicting insurance fraud.
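The sketch below trains a shallow decision tree to mimic the predictions of a boosted ensemble on a synthetic fraud dataset; the fraud rule, features, and data are illustrative. The key point is that the surrogate is fit to the black box's predictions, not to the true labels.

```python
# A minimal global-surrogate sketch: a shallow decision tree trained to mimic
# a "black box" fraud model. The fraud rule, features, and data are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
feature_names = ["claim_amount", "days_to_report", "prior_claims"]

X = np.column_stack([
    rng.integers(200, 50_000, 2000),  # claim_amount
    rng.integers(0, 120, 2000),       # days_to_report
    rng.integers(0, 6, 2000),         # prior_claims
]).astype(float)
y = ((X[:, 0] > 30_000) & (X[:, 1] > 60)).astype(int)  # 1 = flagged as fraud

# The "black box": a boosted ensemble.
black_box = GradientBoostingClassifier().fit(X, y)
black_box_labels = black_box.predict(X)

# The surrogate is trained on the black box's *predictions*, not the true
# labels, so its rules describe what the ensemble actually does.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_labels)

# Fidelity: how often the surrogate agrees with the black box.
print("Fidelity:", accuracy_score(black_box_labels, surrogate.predict(X)))
print(export_text(surrogate, feature_names=feature_names))
```

The fidelity score shows how closely the surrogate tracks the black box; a surrogate with low fidelity explains little, no matter how readable its rules are.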

By integrating these explainability approaches, insurance companies can provide more transparency to their customers, allowing for a better understanding of how decisions are made. This not only builds trust, but also ensures that the models are being used responsibly and ethically.

Productizing Machine Learning Explainability

This blog post has laid out some of the reasons for using ML explainability in your work.

To make explainability more readily accessible, Earnix offers it as a feature in its solution, and you can contact us to arrange a demo. The capability goes beyond a standard implementation of the traditional approaches, fitting the algorithms to the needs of the insurance industry.

A Future Defined by Clarity

As we stand on the cusp of a new era in insurance, one thing is clear: the future of ML in this industry will be defined not just by the sophistication of its algorithms, but also by the clarity of its explanations. The companies that can demystify their digital oracles will lead the way, fostering an environment in which trust and innovation go hand in hand.

In the end, ML explainability isn't just about making algorithms transparent; it's about ensuring that the future of insurance is as much about the human experience as it is about technological advancement.

After all, at the heart of every policy, every claim, and every premium, there's a person seeking not just economic surety, but understanding.

And in a world where AI holds the keys to so many doors, explainability is the light that leads us across the threshold.


About the author:

Luba Orlovsky, Earnix Principal Researcher