
Interpretable Machine Learning
Kishore Joseph - General Manager, Solutions, HCL Financial Services | November 29, 2019

What is interpretability? Why does it matter in machine learning?

Consider a common use case: machine learning-based loan approvals. Some companies have been using machine learning to approve loans based on a borrower's ability to repay. In some cases, a loan is rejected based on the predictions made by the machine learning models used. Such companies would be hard pressed to provide explanations, especially when a pattern emerges in which loans are disproportionately rejected for a certain class or minority of applicants.

One of the big disadvantages of using machine learning is that the insights a model extracts from the data, and the way it solves its task, are hidden inside complex models. For instance, a Random Forest consists of hundreds of decision trees that vote to make predictions. To fully understand why or how a decision was made, we would need to look at the votes of each decision tree, which is not easy.
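To make this concrete, here is a minimal sketch, assuming scikit-learn and a synthetic, purely hypothetical dataset (not any model discussed in this post), of how one might peek at the individual tree votes behind a single Random Forest prediction:

    # Minimal sketch: inspecting individual tree votes in a Random Forest.
    # The data here is synthetic and purely illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    sample = X[:1]  # a single row, e.g. one loan applicant
    votes = [tree.predict(sample)[0] for tree in forest.estimators_]

    print("Forest prediction:", forest.predict(sample)[0])
    print("Trees predicting class 1:", sum(v == 1 for v in votes), "of", len(votes))

Even this simple view only shows how the trees split their votes; explaining why each of the 200 trees voted the way it did would mean tracing every decision path, which quickly becomes impractical.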

Similarly, ensemble models blend different machine learning models, and understanding how they make predictions is quite cumbersome. Even if we are able to understand each individual model separately, understanding how these models work in tandem is quite difficult.


So, what is interpretability? Simply stated, it is the ability to confidently explain how and why a machine learning model arrives at the decisions it makes.

Why is it important? Suppose we predict that some customers may no longer want to be associated with a brand. We would then want to know why such a prediction was made, as this will help us understand the possible reasons why customers are leaving the brand.

In another case, while browsing for an item on a retail website, we often see a ‘Frequently bought together’ section. Sometimes it shows seemingly unrelated items that other customers have bought along with the product on display. We would like to know why this is the case and explore the connections involved.

Let’s take another example. We are all aware of self-driving cars. They can detect other road users, such as cars, motorbikes, and bicycles, to avoid collisions. In the case of a bike, a possible explanation could be to look for anything with two wheels. But how do you explain the decisions that are made when a wheel is partially covered by a rider's saree, or when the bike has a sidecar attached, among other variations?

Here, in my view, lies the importance of interpretability. Moreover, only with interpretability can machine learning models be audited. Having an interpretation for a wrong prediction helps identify the possible causes, after which suitable fixes can be implemented.

Among the other reasons why interpretability is important, Finale Doshi-Velez and Been Kim, in their 2017 paper “Towards A Rigorous Science of Interpretable Machine Learning”, list the following traits that interpretability makes easier to check:

  • Fairness: Making sure that predictions are unbiased and do not discriminate against protected groups (implicitly or explicitly). An interpretable model can tell you why it decided that a certain person is not creditworthy, and it becomes easier for a human to judge whether the decision was based on a learned demographic bias (e.g. racial bias)
  • Privacy: Ensuring that sensitive information in the data is protected
  • Reliability or Robustness: Testing that small changes in the input do not lead to big changes in the prediction (a minimal sketch of such a perturbation check follows this list)
  • Causality: Checking that only causal relationships are picked up, i.e., that a predicted change in the decision caused by a change in the input values would also occur in reality
  • Trust: It is easier for humans to trust a system that explains its decisions compared to a black box.
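As a small illustration of the robustness point above, here is a minimal sketch, assuming scikit-learn, with a hypothetical dataset and model, that nudges each input feature slightly and records how much the predicted probability moves:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical tabular data and model, purely for illustration.
    X, y = make_classification(n_samples=500, n_features=8, random_state=1)
    model = GradientBoostingClassifier(random_state=1).fit(X, y)

    sample = X[:1]
    base_prob = model.predict_proba(sample)[0, 1]

    # Nudge each feature by 1% of its standard deviation and record the
    # change in predicted probability; large swings hint at fragility.
    for j in range(X.shape[1]):
        nudged = sample.copy()
        nudged[0, j] += 0.01 * X[:, j].std()
        delta = model.predict_proba(nudged)[0, 1] - base_prob
        print(f"feature {j}: change in P(class=1) = {delta:+.4f}")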

Is interpretability required for all machine learning models? There are cases where interpretability is not required, especially where a wrong prediction has little impact, or where a problem has been studied so extensively, and there is so much practical experience with the model, that issues are readily identified and fixed. For example, machine learning is used to identify pictures of cats and dogs. These models have been around for quite some time and are well understood, so there are few further insights to be gained from interpreting them.

What would be the scope of interpretability? In a typical machine learning workflow, an algorithm trains a model, and the model produces predictions. We need to be able to evaluate each and every step. We need to know how the algorithm goes about building a model: based on the data provided, how does it identify the relationships inherent in the data? How does it assign weights to the individual data points? Why does it not take all of the provided data into account, relying instead on certain parameters alone?

Once an algorithm has built a model, we need to know how the model makes decisions based on the inputs it is given, including, in some cases, inputs it has never seen during training.

Consider a problem with ten different parameters that must all be considered to arrive at a decision. If a human were to make this decision using the ten parameters, they would invariably need to construct a matrix of sorts to aid decision-making, or intuitively understand the relationships between all the parameters. When a machine makes these predictions, we need to be able to explain how the model reaches its decisions, given that it has no such ready reckoner available to it.
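To illustrate, here is a minimal sketch, assuming scikit-learn, in which the ten parameters are synthetic and a linear model is chosen purely because its weights are directly readable, not because it is the method any particular organization uses. It shows how each parameter contributes to one individual decision:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    # Hypothetical ten-parameter decision problem.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=2)
    X = StandardScaler().fit_transform(X)
    clf = LogisticRegression().fit(X, y)

    sample = X[0]
    # Per-parameter contribution to the log-odds of this one decision.
    contributions = clf.coef_[0] * sample
    for j, c in enumerate(contributions):
        print(f"parameter {j}: contribution to log-odds = {c:+.3f}")
    print(f"intercept: {clf.intercept_[0]:+.3f}")

For a linear model this breakdown is exact; for more complex models, attribution techniques can only approximate it.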

How do we evaluate interpretability? At this point, there is no real consensus on how to evaluate or measure interpretability, nor are there well-established formal approaches for doing so.

Once again, we turn to Doshi-Velez and Kim, who, in their aforementioned 2017 paper, proposed three major levels when evaluating interpretability:

  • Application-level evaluation (real task): Put the explanation into the product and let the end user test it. For example, on an application level, radiologists would test fracture detection software (which includes a machine learning component to suggest where fractures might be in an x-ray image) directly in order to evaluate the model. This requires a good experimental setup and an idea of how to assess the quality. A good baseline for this is always how good a human would be at explaining the same decision.
  • Human-level evaluation (simple task) is a simplified application-level evaluation. The difference is that these experiments are conducted not with domain experts but with lay humans. This makes the experiments less expensive (especially when the domain experts are radiologists) and makes it easier to find more participants. An example would be to show users different explanations and have them choose the best one.
  • Function-level evaluation (proxy task) does not require any humans. This works best when the class of models used has already been evaluated by someone else in a human-level evaluation. For example, it might be known that end users understand decision trees. In this case, a proxy for explanation quality might be the depth of the tree: shorter trees would get a better explainability rating. It would make sense to add the constraint that the predictive performance of the tree remains good and does not drop too much compared to a larger tree (a minimal sketch of such a proxy evaluation follows below).
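Here is a minimal sketch of a function-level (proxy) evaluation along those lines, assuming scikit-learn and a synthetic dataset: tree depth stands in for explainability, checked against the accuracy that is given up.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical dataset; the proxy is "shallower tree = more explainable",
    # subject to predictive performance not dropping too far.
    X, y = make_classification(n_samples=2000, n_features=12, random_state=3)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

    for depth in (3, 5, 8, None):
        tree = DecisionTreeClassifier(max_depth=depth, random_state=3)
        tree.fit(X_train, y_train)
        print(f"max_depth={depth}: actual depth={tree.get_depth()}, "
              f"test accuracy={tree.score(X_test, y_test):.3f}")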

What is explainability? This involves providing verbal explanations of how machine learning models work. Consider the case of explaining the results of a machine learning model to regulators, who will require explanations on how the models work, what goes into these models, and what the results will mean, especially when these models are used to determine outcomes in financial, healthcare and insurance domains.

Another way of looking at explainability is as the task of clarifying, in very clear terms, how a model works, so as to satisfy the queries of customers and regulators.

Who is getting involved in enforcing interpretability? There is a new focus on the interpretability of machine learning models, and governments are getting involved, as we observe in the case of the European Union’s GDPR: the regulation restricts automated individual decision-making based on user-level predictors that could ‘significantly affect’ users. The law also creates a ‘right to explanation,’ whereby a user can ask for an explanation of an algorithmic decision made about them.

Regulatory bodies are not far behind. In recent years, we have seen an expansion of regulations, such as the Federal Reserve Board’s SR 11-7, the Targeted Review of Internal Models, sanction-screening and Anti-Money Laundering regulations, Know Your Customer requirements, various anti-fraud mandates, and the Bank Secrecy Act, among others. In areas such as anti-money laundering and KYC, machine learning models do play an important assistive role, but the same regulations are also pushing organizations to develop more interpretable machine learning models.

What is being done to assist interpretability? There are existing approaches that could potentially bring in interpretability, such as reinforcement learning. As Zachary Lipton observes in “The Mythos of Model Interpretability”: “Reinforcement learners can address some (but not all) of the objectives of interpretability research by directly modeling interaction between models and environments. However, like supervised learning, reinforcement learning relies on a well-defined scalar objective. For problems like fairness, where we struggle to verbalize precise definitions of success, a shift of [the] machine learning paradigm is unlikely to eliminate the problems we face.”

Various other means of interpretability have also emerged in machine learning, through techniques such as ‘surrogate modelling’ and visually plotting the relationship between specific variables and the model’s output (for example, partial dependence plots).
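As an example of the surrogate idea, here is a minimal sketch, assuming scikit-learn, where the black-box model and data are hypothetical, that fits a shallow, readable decision tree to a black-box model's predictions and reports how faithfully it mimics them:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical black-box model whose behaviour we want to approximate.
    X, y = make_classification(n_samples=2000, n_features=6, random_state=4)
    black_box = RandomForestClassifier(n_estimators=300, random_state=4).fit(X, y)

    # Global surrogate: fit a shallow tree to the black box's *predictions*
    # rather than to the original labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=4)
    surrogate.fit(X, black_box.predict(X))

    print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
    print(export_text(surrogate))

The fidelity score tells us how closely the simple rules track the black box; if it is high enough, the printed tree gives a human-readable summary of the model's behaviour.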

In conclusion, machine learning has unleashed a great deal of power for the organizations using it, empowering them and providing competitive advantages. But if these organizations fail to focus on the need for interpretability and explainability, among other factors, they will open themselves up to regulatory penalties, or even worse, to less accurate and unexplainable outcomes.