A Guide to Model Monitoring and Maintaining Predictive Health

Predictive analytics models are valuable organizational assets; they are central to critical business decisions, whether identifying new opportunities or evaluating potential risks. A model that is outdated or underperforming puts decision-makers at risk of acting on inaccurate, unreliable output, which can ultimately affect bottom-line profitability and damage a business. One of the key challenges facing financial services professionals today is understanding at what point in the model management lifecycle models need to be updated to maintain their predictive sharpness. Because predictive analytics models do not “self-update”, it’s important to have a systematic model management lifecycle process in place.

The Importance of a Systematic Model Management Lifecycle Process

Predictive analytics models use data gleaned from past experiences and events to identify future risks and opportunities. Conditions and environments are constantly changing, and those changes need to be reflected in the models; otherwise, model performance will decay over time.

As organizations rely on predictive models on an increasingly larger scale, establishing a consistent, systematic model management lifecycle methodology is of paramount importance. A typical model management lifecycle consists of data management, modeling, validation, model deployment and model monitoring.

The Key to Maintaining Predictive Health – Model Monitoring  

Model updating and calibration are part of any organization’s model lifetime management framework, but it takes discipline, resources and expertise to monitor models effectively. Simple statistical analysis and random spot checks are not the best way to measure a model’s effectiveness and performance. Models should be thoroughly evaluated and measured for accuracy, predictive power, stability over time and other metrics appropriate to the model’s objectives. A model that lacks these characteristics will undermine confident business decision-making and impact business outcomes across an organization.
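
To illustrate the “stability over time” metric, a common check is the population stability index (PSI), which compares the score distribution seen at model build with the distribution from a recent scoring run. The sketch below is a minimal illustration, not part of the article: the function name is our own, it assumes a continuous score, and the widely used rule of thumb that a PSI above roughly 0.25 signals a significant shift is an industry convention rather than a universal threshold.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline score distribution (e.g. at model build)
    and a recent one; larger values indicate a bigger shift."""
    # Bin edges from the baseline distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch scores outside the old range
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small floor avoids log/division problems in empty bins
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))
```

A score population that has merely resampled from the same environment yields a PSI near zero, while a shifted population pushes it well above the 0.25 rule of thumb.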

Best practice requires predictive models to be monitored regularly, and as part of this process, model performance should be tested by analysts who were not involved in estimating the models, who have no stake in the model outputs and who can genuinely challenge the model outcomes.

Monitoring Frequency of Models

So, how often do you need to monitor your models to determine whether they should be recalibrated or rebuilt?

The answer is not simple, because there are many data, business environment and model usage factors to take into account, as well as business constraints and limited resources. Ideally, models should be monitored as frequently as resources allow.

However, establishing a consistent model evaluation and update methodology eliminates the need to rethink if and when your models need attention. When setting up a monitoring frequency methodology, the following factors should be taken into account:

  • How often does the business environment change?

Certain environments are subject to constant change, while others change very little. The more frequent the changes, the more often the models will be affected and the greater the risk that they fall out of touch with the current environment. In such cases, it’s crucial to realign the model to the current environment and thereby sharpen its predictive power.

Within the insurance industry, certain business units require more frequent monitoring of model performance than others. Take the motor insurance business as an example: when comparing the monitoring frequency needed for renewal versus origination models, it’s clear that new business origination models need to be monitored more often. This is a result of the high level of competition for new business and the frequent pricing changes that accompany it.

  • How often are decisions made based on the model’s output?

Models have to reflect reality to produce output that is timely and relevant for decision-making. If a model’s output is only used sporadically, there is no need to update the model just to tick a box. For example, if models are used to generate figures for a balance sheet that is only updated quarterly, there is no need to monitor the model results more than once a quarter.

  • Is the model new?

When a model is new, it is prudent to monitor its performance more frequently until you are confident in the results it produces. As more out-of-sample data comes in from field use of the model, expect a stage of refinement. This is an important part of the evaluation process, as it provides a way to test model accuracy and to evaluate how the model might perform on new data sets.
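
One simple way to make this concrete is to compare each period’s out-of-sample result against the benchmark recorded at model development and flag periods that fall too far below it. The sketch below is illustrative only: the function name, the choice of AUC as the tracked metric, the tolerance and the example values are all our own assumptions, not the article’s.

```python
def flag_periods(benchmark, observed, tolerance=0.05):
    """Return the monitoring periods whose tracked metric fell more
    than `tolerance` below the development-time benchmark."""
    return [period for period, value in sorted(observed.items())
            if value < benchmark - tolerance]

# Hypothetical monthly out-of-sample AUC values for a newly deployed model,
# monitored against a development benchmark of 0.78
monthly_auc = {"2024-01": 0.77, "2024-02": 0.75, "2024-03": 0.70}
print(flag_periods(0.78, monthly_auc))  # → ['2024-03']
```

The tolerance encodes how much sampling noise you are willing to absorb before a period triggers a closer look; a new model would typically start with shorter periods and a tighter tolerance.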

  • How fast can data be collected and developed for monitoring?

Often the data needed for monitoring models is not immediately available. It’s important to account for this in the monitoring process so that enough time is left to collect and develop the data needed for a credible analysis.

The following may impact data readiness:

  • Sporadic data points: model monitoring requires accumulating enough data to ensure the tests are credible. In certain cases a particular phenomenon occurs only occasionally, for example claims above a certain excess, so it takes time to collect enough data points for analysis.
  • Data collection from external sources/third parties: when collecting data from third-party agents or other intermediaries, obtaining the required data usually takes time, and the data is available at less frequent intervals.
  • Data development: in certain circumstances, you may need to wait some time after the end of the period in which the models were used in order to let the data develop. This is a well-known phenomenon in claims analysis, as it sometimes takes a significant amount of time for claims to be reported and for claim amounts to stabilize. It also applies to renewal and origination modeling: in loan and mortgage insurance origination, for example, insurers and banks give customers several weeks from quote date to make a decision.

  • What are the potential risks if models are not updated?

One final element to consider regarding monitoring frequency is the potential risk of not having a fully calibrated or newly rebuilt model. Incorrect models can have a number of damaging consequences, including severe financial loss, poor decision-making and damage to an organization’s reputation. A well-known example involved heavy reliance on a formula’s result without questioning whether it actually worked under the specific conditions; you can read about this example in The Guardian (“The mathematical equation that caused the banks to crash”).

While the monitoring frequency above is driven by events that affect the models on a regular basis, if an internal or external event of greater magnitude occurs, the models should be monitored more closely, as such events tend to have a greater impact. Internal events could include changes to the product, underwriting, distribution channels and processing procedures, while external events could include changes in legislation, regulation and customer behavior.

The Model Updating Process

Models that are not performing at their best may need to be recalibrated to the current environment or rebuilt completely. So how do you move forward?

  • Recalibration of Models

In the case of model recalibration, the basic assumptions of the original model are still valid. It is then a matter of gathering and analyzing new data, re-fitting the model and testing the outcomes to make sure they are in line with the current environment.
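
As a simple illustration of re-fitting when the original assumptions still hold: if the model’s ranking of risks is still believed valid but the overall level has drifted, one common lightweight recalibration refits only an intercept shift on the new data so that predicted probabilities match the observed event rate. The sketch below is a minimal version of that idea under those assumptions; the function name and the Newton-step approach are illustrative choices, not the article’s method.

```python
import numpy as np

def recalibrate_intercept(log_odds, outcomes, max_iter=100):
    """Fit a single intercept shift `delta` by Newton's method so that
    sigmoid(log_odds + delta) matches the observed event rate on the
    new data. The original model's coefficients are kept unchanged,
    i.e. its basic assumptions are taken to still hold."""
    log_odds = np.asarray(log_odds, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    delta = 0.0
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(log_odds + delta)))
        grad = outcomes.mean() - p.mean()   # log-likelihood gradient per observation
        if abs(grad) < 1e-10:               # mean prediction matches event rate
            break
        delta += grad / max(np.mean(p * (1.0 - p)), 1e-12)  # Newton step
    return delta
```

After the shift is applied, the recalibrated probabilities should still be tested against current outcomes before redeployment, in line with the process the article describes.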

  • Rebuilding of Models

In the case where a model has to be rebuilt, the main factor to address is the original assumptions of the model. Once the assumptions have been updated and redefined, the same process has to be followed as when the original models were built, including data collection, model building and testing of results.

As you can see, it’s vital to monitor predictive analytical models and update them when needed. Sometimes all it takes is a bit of fine-tuning or recalibration; sometimes there is no option but to rebuild them from scratch. Whatever the outcome, it’s clear that monitoring frequency is influenced more by changes in the business environment than by pure statistical need. Taking into account the resources at your disposal, the best approach is to find the point on the risk/frequency trade-off scale that you are comfortable with, and take action!

Now, let’s hear from you – how often do you monitor your analytical models and why have you chosen that frequency level? Feel free to share your thoughts in the comment field below.