Describe the workflow you want to enable
I like the inspect module very much. Sometimes, interpreting a model is more natural on a different scale (e.g. the log scale) than on the scale of the predictions.
Here are some examples:
- We fit a GLM. It is often more natural to inspect the model on the linear scale of its link function, not on the scale of the response variable.
- We fit a Poisson regression in XGBoost/LGB. These models are fitted on log scale, so it might be worth inspecting them on the log scale, even if it is only to compare with a benchmark GLM model.
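To make the second example concrete, here is a small sketch with scikit-learn's PoissonRegressor (log link): predict() returns values on the response scale, while the model itself is linear on the log scale.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = rng.poisson(np.exp(0.5 * X[:, 0]))

model = PoissonRegressor().fit(X, y)

# predict() returns exp(linear predictor): the response scale
response_scale = model.predict(X)

# the natural (link) scale of the model is the linear predictor itself
link_scale = X @ model.coef_ + model.intercept_

# the two scales are connected by the log link
print(np.allclose(np.log(response_scale), link_scale))  # True
```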
Describe your proposed solution
Add an argument transformation to all explainers (partial dependence, permutation importance). By default, it is None (or the identity). The user can pass e.g. np.log to allow evaluation on the log scale.
- Partial dependence plot: here, it suffices to transform the predictions before averaging them.
- Permutation importance: here, both the response and the predictions need to be transformed. The scorer must be in line with the transformation and provided by the user.
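A minimal sketch of both ideas with existing tools. The proposed transformation keyword does not exist yet, so it is emulated by hand here; the helper names are illustrative only.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = rng.poisson(np.exp(0.5 * X[:, 0]))
model = PoissonRegressor().fit(X, y)

# Partial dependence on a transformed scale: transform the predictions
# BEFORE averaging them over the dataset.
def partial_dependence_on_scale(est, X, feature, grid, transform=np.log):
    values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        values.append(transform(est.predict(X_mod)).mean())
    return np.array(values)

pd_log = partial_dependence_on_scale(model, X, feature=0,
                                     grid=np.linspace(-1, 1, 5))

# Permutation importance on a transformed scale: transform BOTH the
# response and the predictions inside a user-supplied scorer
# (log1p is used so that zero counts in y are handled).
def log_scale_scorer(est, X, y):
    return -mean_squared_error(np.log1p(y), np.log1p(est.predict(X)))

result = permutation_importance(model, X, y, scoring=log_scale_scorer,
                                n_repeats=5, random_state=0)
```

Note that the scorer has to match the transformation: comparing log-scale predictions against a raw-scale response would make the importances meaningless, which is why the proposal asks the user to provide both together.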
Describe alternatives you've considered, if relevant
An alternative would be to change the prediction function of the classifier/regressor so that it already returns values on the desired scale.
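This alternative could look like a thin wrapper around an estimator; TransformedRegressor is a hypothetical name, not an existing scikit-learn class.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

class TransformedRegressor:
    """Hypothetical wrapper: predict() returns transformed predictions,
    so existing inspection tools see the transformed scale directly."""

    def __init__(self, estimator, transform=np.log):
        self.estimator = estimator
        self.transform = transform

    def fit(self, X, y):
        self.estimator.fit(X, y)
        return self

    def predict(self, X):
        # every downstream consumer of predict() now works on the log scale
        return self.transform(self.estimator.predict(X))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = rng.poisson(np.exp(0.5 * X[:, 0]))

wrapped = TransformedRegressor(PoissonRegressor()).fit(X, y)
# predictions now come out on the log (link) scale, not the response scale
log_preds = wrapped.predict(X)
```

A drawback of this approach is that every downstream consumer of predict() silently receives transformed values, so scorers or metrics that expect response-scale predictions would break; a dedicated transformation argument on the explainers keeps the model itself untouched.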