Explainable AI with ICE (Individual Conditional Expectation Plots)


Partial Dependence Plots have a serious shortcoming: a PDP can show a nearly flat line, suggesting the feature has no significant effect on the outcome, when in reality the individual effects are of similar magnitude but opposite sign and cancel out in the average. Individual Conditional Expectation (ICE) plots address this issue. They were introduced in 2014 by researchers from The Wharton School, building upon Friedman's work on partial dependence. They also aim to reveal interactions between variables, which PDPs largely cannot.


ICE plots are built on top of PDPs: they disaggregate the averaged curve, making it possible to inspect the effect of the predictor variable for each individual instance while holding the other predictor variables constant. A basic ICE plot shows how varying one feature's value for an instance changes the prediction, with all other feature values kept fixed. Analyzing all the data points at once can be cumbersome, but ICE plots also allow plotting the curve for a single instance.
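To make this concrete, here is a minimal NumPy sketch (with a hypothetical toy model, not from the original article) that computes ICE curves by hand: for each instance, one feature is swept over a grid while the remaining features stay fixed at that instance's values. The toy model is chosen so that the individual curves have slopes of opposite sign while their average, the PDP, is nearly flat.

```python
import numpy as np

# Hypothetical model with an interaction: the effect of x0 flips sign
# with x1, so the PDP (the average of the ICE curves) is nearly flat.
def model(X):
    return X[:, 0] * np.sign(X[:, 1])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
grid = np.linspace(-2.0, 2.0, 21)   # values to sweep feature 0 over

# One ICE curve per instance: vary feature 0, keep feature 1 fixed.
ice_curves = np.empty((len(X), len(grid)))
for i, x in enumerate(X):
    X_rep = np.tile(x, (len(grid), 1))
    X_rep[:, 0] = grid
    ice_curves[i] = model(X_rep)

pdp = ice_curves.mean(axis=0)       # the PDP is the average ICE curve
slopes = (ice_curves[:, -1] - ice_curves[:, 0]) / (grid[-1] - grid[0])
```

Each individual slope is exactly +1 or -1, yet the averaged PDP stays close to zero everywhere: precisely the cancellation that a PDP hides and an ICE plot exposes.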

Centered ICE plot

A general ICE plot may be visually challenging to read, and it can be hard to compare two curves that originate from different points. To solve this problem, the curves can be centered so that they all originate from a single anchor point; the differences between curves then become easy to spot. The resulting plot is called a "c-ICE" plot. Anchoring the curves at the lowest value of the feature is generally observed to give the best results.
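In code, centering is just a subtraction: each curve is shifted so that it starts at zero at the anchor point. A small sketch with illustrative curve values (not from the article):

```python
import numpy as np

# Illustrative ICE curves: rows are instances, columns are grid points.
ice_curves = np.array([[1.0, 2.0, 4.0],
                       [3.0, 3.5, 2.0]])

# c-ICE: subtract each curve's value at the anchor (leftmost grid point),
# so all curves originate from zero and differences are easy to compare.
centered = ice_curves - ice_curves[:, [0]]
```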

Derivative ICE plot

These plots are useful for investigating the presence of interactions and the direction of change in the prediction with respect to a feature, by estimating the partial derivative of each ICE curve. They are called "d-ICE" plots. When the feature does not interact with other features, the derivative curves are homogeneous, differing only in the level of prediction; heterogeneous derivative curves indicate an interaction between this feature and other features. The derivative plot is given by the partial derivative of the fitted function with respect to the feature of interest, with the remaining features held at each instance's own values.
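Numerically, a d-ICE curve can be approximated by differentiating each ICE curve along the grid, for example with NumPy's `np.gradient` (the curves below are illustrative, not from the article):

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 5)
# Two illustrative ICE curves: one rises, one falls.
ice_curves = np.array([2.0 * grid + 1.0,
                       -2.0 * grid + 3.0])

# d-ICE: estimate the partial derivative of each curve along the grid.
d_ice = np.gradient(ice_curves, grid, axis=1)
```

Here the derivative curves are heterogeneous (+2 for one instance, -2 for the other), which is exactly the signature of an interaction; without interactions they would all lie on top of each other.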



Advantages

  • They are intuitive and easy to implement
  • They can help decode interactions between variables
  • They allow analyzing the model at the granularity of each instance


Disadvantages

  • It can get cumbersome to derive insights from ICE plots at times
  • They can display the effect of only one feature at a time


We use the pycebox library to generate ICE plots.


ICE plot

ICE plot with PDP line

Centered ICE plot


  1. Molnar, Christoph. "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable", 2019. https://christophm.github.io/interpretable-ml-book/ (the images were taken from here)
  2. Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. "The Elements of Statistical Learning".


