SHAP Values for Explainable AI

Shapley values, explained in the simplest terms

Abhishek Maheshwarappa
6 min read · Aug 9, 2021

Introduction

With the growing debate around accuracy versus explainability, SHAP (SHapley Additive exPlanations) provides a game-theoretic approach to explaining the output of any ML model.

SHAP was introduced in the research paper A Unified Approach to Interpreting Model Predictions by Scott M. Lundberg and Su-In Lee in 2017.

If you hate theory and just want to play with the code, here is a Google Colab for you.

For everyone else interested in how SHAP works, read the entire story.

An example to make sense of Shapley values

To understand the mathematics behind Shapley values, we will take the example of baking cookies and use Shapley values to explain the contribution of each person baking them.

Let us say David and Lisa are baking cookies individually. David bakes 10 cookies and Lisa bakes 20 cookies.

When David and Lisa are working together, they streamline the process and manage to bake 40 cookies together.

If we consider $1 as the price of each cookie, then when Lisa and David bake individually they bake 30 cookies in total (David makes 10 and Lisa makes 20), earning $30. But when they bake together, they earn $40 (40 cookies together).

Let us now calculate their marginal contributions.

Case 1

Since David bakes 10 cookies when working alone, Lisa's marginal contribution to the coalition is 30 (= 40 − 10).

Case 2

Since Lisa bakes 20 cookies when working alone, David's marginal contribution to the coalition is 20 (= 40 − 20).

In the first case, David's contribution to the coalition is 10 cookies; in the second case, his contribution is 20 cookies.

According to the Shapley formula, to find David's Shapley value we average his marginal contributions:

(10 + 20) / 2 = 15

This is David's Shapley value.

For Lisa, the contribution to the coalition is 30 cookies in the first case and 20 cookies in the second, so her Shapley value is

(30 + 20) / 2 = 25

This is Lisa's Shapley value. Notice that the two values add up to 40, the coalition's total payout.
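What we just did by hand is average each player's marginal contribution over every order in which the players could join the coalition. Here is a minimal Python sketch of that computation, using the names and payouts from the example above:

# Shapley values for the two-player cookie game: average each player's
# marginal contribution over both possible join orders.
from itertools import permutations
from math import factorial

# Payout (in cookies) for every possible coalition.
v = {
    frozenset(): 0,
    frozenset({"David"}): 10,
    frozenset({"Lisa"}): 20,
    frozenset({"David", "Lisa"}): 40,
}

players = ["David", "Lisa"]
shapley = {p: 0.0 for p in players}

for order in permutations(players):  # (David, Lisa) and (Lisa, David)
    coalition = frozenset()
    for p in order:
        # Marginal contribution of p when joining the current coalition.
        shapley[p] += v[coalition | {p}] - v[coalition]
        coalition = coalition | {p}

for p in players:
    shapley[p] /= factorial(len(players))  # average over all join orders

print(shapley)  # {'David': 15.0, 'Lisa': 25.0}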

To understand the technicalities, we first need to shine a light on game theory and then dive into SHAP. The next few sections are about game theory; if you are already familiar with it, you can skip the next couple of paragraphs.

Game theory

Game theory is the modeling of strategic interaction between two or more people in a situation with set rules and outcomes, where each person's payoff is affected by the decisions made by the others. Game theory is widely used by economists, political scientists, the military, and many others. It was initially developed as a mathematical theory by John von Neumann and Oskar Morgenstern in 1944.


Non-Cooperative Game theory

Non-cooperative game theory covers competitive social interactions in which there will be some winners and some losers. This is where the Nash equilibrium comes into play. This article does not deal with game theory in detail; to read more about the Nash equilibrium, refer to the article from Investopedia. Non-cooperative game theory is best understood through the example of the Prisoner's Dilemma.

Cooperative Game theory

Cooperative game theory is where every player agrees to work together towards a common goal; this is where Shapley values fall. In game theory, a coalition is what you call a group of players in a cooperative game.

What are Shapley values?

A method of dividing up the gains or costs among players according to the value of their individual contributions.

It rests on three important pillars:

1. Marginal contribution

The contribution of each player is determined by what is gained or lost by removing them from the game. This is called their marginal contribution.

2. Interchangeable players have equal value

If two players bring the same contribution to the coalition, they should receive the same reward for it.

3. Dummy player has zero value

If a member of the coalition contributes nothing, then they should receive nothing, even though this might not feel fair in every case.

Mathematically, Shapley values are given by

\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \left( v(S \cup \{i\}) - v(S) \right)

In a coalitional game, we have a set N of n players. We also have a function v that gives us a value (or payout) for any subset of the n players. In other words, if S is a subset of N, then v(S) gives the value of that subset. So, for a coalitional game (N, v), we can use the above equation to calculate the payout for each player i, i.e. the Shapley value.
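As a sanity check, here is a minimal Python sketch of the formula above for an arbitrary payout function v (the function name shapley_value is just for illustration); applied to the cookie game, it reproduces the 15 and 25 computed earlier:

# The Shapley formula: a weighted sum of player i's marginal
# contributions over all subsets S of the other players.
from itertools import combinations
from math import factorial

def shapley_value(players, v, i):
    """Shapley value of player i in the coalitional game (players, v),
    where v maps a frozenset of players to its payout."""
    n = len(players)
    others = [p for p in players if p != i]
    total = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            S = frozenset(subset)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v(S | {i}) - v(S))
    return total

# Reusing the payouts from the cookie game above.
payout = {
    frozenset(): 0,
    frozenset({"David"}): 10,
    frozenset({"Lisa"}): 20,
    frozenset({"David", "Lisa"}): 40,
}
v = payout.get  # v(S) looks up the payout of coalition S

print(shapley_value(["David", "Lisa"], v, "David"))  # 15.0
print(shapley_value(["David", "Lisa"], v, "Lisa"))   # 25.0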

SHAP provides both global and local explanations for any ML model. SHAP is a model-agnostic technique, able to explain any variety of models. SHAP is even data-agnostic: it can be applied to tabular data, image data, or textual data.

The main problem with SHAP is computational cost: generating Shapley values is very expensive, and as the number of features increases it becomes difficult for non-technical people to make sense of them.

For the code implementation, the Pima Indians Diabetes data is used with a random forest model. Since there are numerous ways to use SHAP to explain a model, the implementation shows a few of the most common ones; the rest will be dealt with in the following blogs, as this topic is too big for one blog post. A minimal sketch of the setup appears below.
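This sketch assumes a local copy of the dataset as diabetes.csv with the usual column names, including an Outcome label; adjust the path and columns to match your copy:

# Train a random forest on the Pima Indians Diabetes data and build a
# SHAP explainer for it.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("diabetes.csv")   # assumed local copy of the dataset
X = df.drop(columns=["Outcome"])   # features
y = df["Outcome"]                  # 1 = diabetic, 0 = not diabetic

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
# For a binary classifier, older shap versions return a list with one
# array of SHAP values per class.
shap_values = explainer.shap_values(X_val)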

Visualization for SHAP values

1. Force Plot

A force plot deals with a single row or data point and tries to explain how the model arrived at its output for those inputs. In the notebook example, the force plot shows a 77% chance that a person with those feature values is diabetic.
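A minimal sketch of how such a force plot is produced, continuing the code above (the per-class list indexing assumes an older shap version; newer versions may return a single array instead):

# Render a force plot for one validation row. Index [1] selects the
# positive ("diabetic") class of the binary random forest.
shap.initjs()  # enables JavaScript rendering in a notebook

row = 0  # an arbitrary example row
shap.force_plot(
    explainer.expected_value[1],  # base value for the positive class
    shap_values[1][row],          # SHAP values for that row and class
    X_val.iloc[row],              # the feature values being explained
)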

2. Summary Plot

A summary plot shows the SHAP values of each feature, identifying how much impact each feature has on the model output for individuals in the validation dataset. Features are sorted by the sum of the SHAP value magnitudes across all samples.

We can see that Glucose is the most impactful feature, with the highest combined effect on whether a person is predicted to be diabetic or not.
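Continuing the same sketch, the summary plot takes the full matrix of SHAP values rather than a single row:

# Summary plot for the positive ("diabetic") class: one dot per sample
# per feature, with features sorted by overall impact.
shap.summary_plot(shap_values[1], X_val)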

There are other visualizations that you can play with and learn more from in the notebook - link here.

The next article will present some unique techniques required for applying SHAP to non-pandas data frames, so stay tuned!

If you liked this, clap, share it, and follow me.

References

  1. A Unified Approach to Interpreting Model Predictions (Lundberg & Lee, 2017)
  2. SHAP documentation
  3. https://github.com/slundberg/shap
  4. https://christophm.github.io/interpretable-ml-book/
