{"id":167160,"date":"2026-02-18T11:33:21","date_gmt":"2026-02-18T10:33:21","guid":{"rendered":"https:\/\/liora.io\/en\/?p=167160"},"modified":"2026-02-18T11:33:22","modified_gmt":"2026-02-18T10:33:22","slug":"shap-what-is-it","status":"publish","type":"post","link":"https:\/\/liora.io\/en\/shap-what-is-it","title":{"rendered":"SHapley Additive exPlanations, or SHAP: What is it?"},"content":{"rendered":"<p><strong>SHapley Additive exPlanations, more commonly known as SHAP, is used to explain the output of Machine Learning models. It is based on Shapley values, which use game theory to assign credit for a model&#8217;s prediction to each feature or feature value.<\/strong><\/p>\n\n<!-- wp:paragraph -->\n<p>SHAP works by <b>decomposing the output<\/b> of a model into the sum of the impacts of each feature. SHAP calculates a value that represents the contribution of each feature to the <b>model outcome<\/b>. These values can be used to understand the <b>importance of each feature<\/b> and to explain the result of the model to a human. This is especially valuable for agencies and teams that report to their clients or managers.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>SHAP has <b>several interesting properties<\/b>, such as its neutrality towards models. This allows it to be used on any learning model, to <b>produce consistent explanations<\/b>, and to handle complex model behaviors (when features interact with each other, for example).<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:heading -->\n<h2 id=\"h-what-shap-is-it-used-for\" class=\"wp-block-heading\">What is SHAP used for?<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>SHAP has many uses for data science professionals. First, it helps <b>explain the predictions<\/b> of <a href=\"https:\/\/liora.io\/en\/machine-learning-what-is-it-and-why-does-it-change-the-world\">Machine Learning models<\/a> in a way that humans can understand. 
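<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>The additive idea can be shown with a minimal sketch. The base value and the per-feature SHAP values below are made-up numbers for illustration, not the output of any real model:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:code -->\n<pre class=\"wp-block-code\"><code>

```python
# Hypothetical SHAP values for a single prediction (made-up numbers).
base_value = 0.30  # the model's average output over the background data
contributions = {"age": 0.12, "income": -0.05, "tenure": 0.08}

# SHAP's additive property: base value + sum of per-feature
# contributions reconstructs the model's prediction.
prediction = base_value + sum(contributions.values())
print(round(prediction, 2))  # 0.45
```

<\/code><\/pre>\n<!-- \/wp:code -->\n\n<!-- wp:paragraph -->\n<p>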
By assigning a value to each input feature, it shows how and to what extent each feature contributed to the <b>final prediction result<\/b>. This way, the team can understand how the model made its decision and can identify the most important features.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>As explained earlier, this method is model-agnostic (neutral): it can be <b>used with any Machine Learning model<\/b>. So you don&#8217;t have to worry about the structure of the model to understand the prediction result with SHAP. Moreover, the method is consistent. You can therefore trust the explanations produced, regardless of the model studied.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Finally, SHAP is particularly useful for handling complex behaviors. You can use this technique to understand how different features affect the model prediction together.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:buttons {\"className\":\"is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\",\"layout\":{\"type\":\"flex\",\"justifyContent\":\"center\"}} -->\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><!-- wp:button -->\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/data-ai\/machine-learning-engineer\">Discover our Machine Learning Engineer training<\/a><\/div>\n<!-- \/wp:button --><\/div>\n<!-- \/wp:buttons -->\n\n<!-- wp:heading -->\n<h2 id=\"h-how-to-use-shap-to-explain-predictions\" class=\"wp-block-heading\">How to use SHAP to explain predictions?<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>Here is how to use SHAP to explain the predictions of a Machine Learning model:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:list {\"ordered\":true} -->\n<ol 
class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Install the SHAP package using &#8216;pip install shap&#8217;.<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Import the SHAP package and other necessary libraries, such as <b>Numpy<\/b> and <b>Matplotlib<\/b>.<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Load your <b>Machine Learning model<\/b> and prepare the input data you want to explain.<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Create a SHAP explainer using the &#8216;<b>shap.TreeExplainer<\/b>&#8217; class for tree-based models, or &#8216;<b>shap.KernelExplainer<\/b>&#8217; for other model types.<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Call the &#8216;<b>shap_values<\/b>&#8217; method of the explainer, passing it the input data you want to explain. This method will <b>return a matrix of SHAP values<\/b> that represents the impact of each feature on the model prediction.<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Use the SHAP values to visualize and interpret the results. For example, you can use the &#8216;<b>shap.summary_plot<\/b>&#8217; function to generate a summary graph that shows the relative importance of each feature. You can also use the &#8216;<b>shap.dependence_plot<\/b>&#8217; function to visualize how a particular feature influences the model prediction as a function of that feature&#8217;s value.<\/li>\n<!-- \/wp:list-item --><\/ol>\n<!-- \/wp:list -->\n\n<!-- wp:paragraph -->\n<p>This workflow is simple and very effective.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:heading -->\n<h3 id=\"h-example-of-shap-use\" class=\"wp-block-heading\">Example of SHAP use<\/h3>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>You will find below an example of SHAP use, based on decision trees. 
To better understand the example, let&#8217;s talk about <b>TreeExplainer<\/b>.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>TreeExplainer uses an <b>algorithm specialized for tree ensembles<\/b> to calculate the SHAP values of each feature. It is therefore useful for explaining predictions of Machine Learning models <b>built on decision trees<\/b>, for both regression and classification, including Random Forests and Gradient Boosting Machines.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:image {\"align\":\"center\",\"style\":{\"spacing\":{\"margin\":{\"top\":\"var:preset|spacing|columns\",\"bottom\":\"var:preset|spacing|columns\"}}}} -->\n<figure class=\"wp-block-image aligncenter\" style=\"margin-top:var(--wp--preset--spacing--columns);margin-bottom:var(--wp--preset--spacing--columns)\"><img decoding=\"async\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/03\/image1.png\" alt=\"Random forest\"><\/figure>\n<!-- \/wp:image -->\n\n<!-- wp:paragraph -->\n<p>Here is a simple example of using SHAP with a regression model based on decision trees:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:code {\"style\":{\"spacing\":{\"margin\":{\"top\":\"var:preset|spacing|columns\",\"bottom\":\"var:preset|spacing|columns\"}}},\"fontSize\":\"xsmall\"} -->\n<pre class=\"wp-block-code has-xsmall-font-size\" style=\"margin-top:var(--wp--preset--spacing--columns);margin-bottom:var(--wp--preset--spacing--columns)\"><code>import shap\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Load your trained model (placeholder function)\nmodel = load_model()\n# Prepare the input data you want to explain (placeholder function)\nX = prepare_data()\n# Create a SHAP explainer using TreeExplainer\nexplainer = shap.TreeExplainer(model)\n# Compute the SHAP values for the input data\nshap_values = explainer.shap_values(X)\n# Display a summary plot of the relative importance of each feature\nshap.summary_plot(shap_values, 
X)\n# Display the dependence plot for the \"age\" feature\nshap.dependence_plot(\"age\", shap_values, X)<\/code><\/pre>\n<!-- \/wp:code -->\n\n<!-- wp:paragraph -->\n<p>This example calculates the <b>SHAP values for each row of X<\/b>. It then displays a summary plot of the relative importance of each feature and a dependence plot for the &#8216;age&#8217; feature.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:heading -->\n<h2 id=\"h-in-conclusion\" class=\"wp-block-heading\">In conclusion<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>SHAP is thus a versatile and powerful technique for explaining the <b>predictions of Machine Learning models<\/b>. The method is agnostic, consistent, and can <b>handle complex model behavior<\/b>. SHAP is particularly useful for understanding how a model works, identifying important features, and explaining the <b>result of predictions<\/b> to others on your team or to your customers.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Now that you&#8217;ve discovered SHAP, you may want to master it. 
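<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Before diving deeper, the game-theory idea behind these values can be made concrete: for a very small model, exact Shapley values can be computed by brute force over feature coalitions. The three-feature linear &#8216;model&#8217;, its coefficients, and the input values below are invented purely for illustration:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:code -->\n<pre class=\"wp-block-code\"><code>

```python
from itertools import combinations
from math import factorial

features = ["age", "income", "tenure"]
x = {"age": 40, "income": 3.0, "tenure": 5}         # instance to explain (made up)
baseline = {"age": 30, "income": 2.0, "tenure": 2}  # background values (made up)

def model(inputs):
    # A toy linear model, so the exact Shapley values are easy to check:
    # each one equals coefficient * (x - baseline) for its feature.
    return 0.01 * inputs["age"] + 0.2 * inputs["income"] + 0.05 * inputs["tenure"]

def value(coalition):
    # Model output when only the coalition's features take their real
    # values; all other features stay at the baseline.
    inputs = {f: (x[f] if f in coalition else baseline[f]) for f in features}
    return model(inputs)

def shapley(feature):
    # Classic Shapley formula: weighted average of the feature's marginal
    # contribution over every coalition of the remaining features.
    n = len(features)
    others = [f for f in features if f != feature]
    total = 0.0
    for k in range(n):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(set(coalition) | {feature}) - value(set(coalition)))
    return total

contributions = {f: shapley(f) for f in features}
# age: 0.01*(40-30) = 0.1, income: 0.2*(3.0-2.0) = 0.2, tenure: 0.05*(5-2) = 0.15
print(contributions)
```

<\/code><\/pre>\n<!-- \/wp:code -->\n\n<!-- wp:paragraph -->\n<p>Libraries like SHAP avoid this exponential enumeration with model-specific shortcuts such as TreeExplainer, but the values they produce follow the same definition.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>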
To do so, we invite you to learn more about <a href=\"https:\/\/liora.io\/en\/courses\/\">Liora training courses<\/a> that incorporate Machine Learning into the curriculum.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:buttons {\"className\":\"is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\",\"layout\":{\"type\":\"flex\",\"justifyContent\":\"center\"}} -->\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><!-- wp:button -->\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"\/en\/courses\/data-ai\/machine-learning-engineer\">Start a Machine Learning Engineer training course<\/a><\/div>\n<!-- \/wp:button --><\/div>\n<!-- \/wp:buttons -->\n\n<!-- wp:html -->\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is SHAP used for?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"SHapley Additive exPlanations, more commonly known as SHAP, is used to explain the output of Machine Learning models by assigning a value to each feature that represents its contribution to the model\u2019s prediction.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How does SHAP work?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"SHAP works by decomposing the output of a model into the sum of the impacts of each feature using Shapley values from game theory, reflecting how much each feature contributed to the prediction.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What are the properties of SHAP?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        
\"text\": \"SHAP has interesting properties such as model agnosticism and consistency, allowing it to be used with any learning model to produce reliable explanations even when features interact.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How to use SHAP to explain predictions?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"To use SHAP, you install the SHAP package, create a SHAP explainer for your model type, compute SHAP values for input data, and visualize these values to understand feature impacts.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Why is SHAP particularly valuable?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"SHAP is valuable because it helps explain complex model behavior by showing how individual features affect predictions, aiding interpretation and communication of results.\"\n      }\n    }\n  ]\n}\n<\/script>\n\n<!-- \/wp:html -->","protected":false},"excerpt":{"rendered":"<p>SHapley Additive exPlanations, more commonly known as SHAP, is used to explain the output of Machine Learning models. It is based on Shapley values, which use game theory to assign credit for a model&#8217;s prediction to each feature or feature value. 
The way SHAP works is to decompose the output of a model by the [&hellip;]<\/p>\n","protected":false},"author":82,"featured_media":207181,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"editor_notices":[],"footnotes":""},"categories":[2433],"class_list":["post-167160","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/167160","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/users\/82"}],"replies":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/comments?post=167160"}],"version-history":[{"count":5,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/167160\/revisions"}],"predecessor-version":[{"id":207182,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/167160\/revisions\/207182"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media\/207181"}],"wp:attachment":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media?parent=167160"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/categories?post=167160"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}