Unrestricted Permutation Forces Extrapolation: Variable Importance Requires at Least One More Model, or There Is No Free Variable Importance

Giles Hooker, Lucas Mentch, Siyu Zhou

Statistics and Computing
October 29, 2021

Abstract: This paper reviews and advocates against the use of permute-and-predict (PaP) methods for interpreting black box functions. Methods such as the variable importance measures proposed for random forests, partial dependence plots, and individual conditional expectation plots remain popular because they are both model-agnostic and depend only on the pre-trained model output, making them computationally efficient and widely available in software. However, numerous studies have found that these tools can produce diagnostics that are highly misleading, particularly when there is strong dependence among features. The purpose of our work here is to (i) review this growing body of literature, (ii) provide further demonstrations of these drawbacks along with a detailed explanation as to why they occur, and (iii) advocate for alternative measures that involve additional modeling. In particular, we describe how breaking dependencies between features in hold-out data places undue emphasis on sparse regions of the feature space by forcing the original model to extrapolate to regions where there is little to no data. We explore these effects across various model setups and find support for previous claims in the literature that PaP metrics can vastly over-emphasize correlated features in both variable importance measures and partial dependence plots. As an alternative, we discuss and recommend more direct approaches that involve measuring the change in model performance after muting the effects of the features under investigation. 

Lay summary: 

Many machine learning packages produce measures of feature importance. These can be obtained in a number of ways, but a common practice is to replace the feature of interest with a new value and measure how much this degrades predictive performance. The easiest new value to choose is simply that feature's value from a randomly chosen example.
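To make the procedure concrete, here is a minimal sketch of this permute-and-predict (PaP) idea. The random forest, the simulated data, and the use of a held-out test set are illustrative assumptions, not the specific setup used in the paper.

```python
# Permute-and-predict (PaP) sketch: shuffle one feature column in held-out
# data and record how much the model's test error increases.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.normal(size=(n, p))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=n)  # toy regression

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
base_mse = mean_squared_error(y_te, model.predict(X_te))

for j in range(p):
    X_perm = X_te.copy()
    # Replacing feature j with shuffled values breaks its dependence on
    # the other features, which is exactly what forces extrapolation.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    perm_mse = mean_squared_error(y_te, model.predict(X_perm))
    print(f"feature {j}: PaP importance = {perm_mse - base_mse:.3f}")
```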

This paper shows that this practice is dangerous: it produces examples with feature combinations that are never observed in the data, forcing the prediction tool to extrapolate. When we examine the effect of this extrapolation, we see that it can considerably distort the ranking of feature importance. Instead, we need to do more work, either re-training the model or simulating from the conditional distribution of a feature given the others, to obtain valid feature importance.
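As one illustration of the "at least one more model" alternative, the sketch below uses a leave-one-covariate-out style comparison: drop the feature, refit the model, and compare held-out error to that of the full model. It reuses the variables from the previous sketch and is only one of several retraining or conditional-simulation schemes discussed in the paper.

```python
# Retraining-based importance sketch: refit the model without feature j
# and measure how much held-out error increases relative to the full fit.
# Assumes X_tr, X_te, y_tr, y_te, base_mse from the PaP sketch above.
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

for j in range(X_tr.shape[1]):
    keep = [k for k in range(X_tr.shape[1]) if k != j]
    refit = RandomForestRegressor(random_state=0).fit(X_tr[:, keep], y_tr)
    drop_mse = mean_squared_error(y_te, refit.predict(X_te[:, keep]))
    print(f"feature {j}: retrain importance = {drop_mse - base_mse:.3f}")
```

Because the refit model never sees the dropped feature, no prediction is ever requested at feature combinations outside the data, at the cost of training one extra model per feature.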



Featured Fellows

Giles Hooker

Statistics, UC Berkeley
Faculty Affiliate