Model-agnostic explanations
1. Model-agnostic explanations
Excellent work on the exercises so far! We will now dive into the explainability of black box models.
2. Black box models
Imagine a black box model as a magical device that can predict things like whether a photo contains a cat or a dog, but we have no idea how it makes those predictions. It’s like a magician’s hat. We know what goes in, and we see what comes out, but the trick happening inside is a mystery. This is caused by the number of internal parameters, which can become huge in complex models.
3. Global and local explanations
To mitigate this lack of transparency in black box models, we can apply two techniques to estimate what happened in the magician's hat. This will give us insight into how the model came to its decision. We will look into SHAP, short for SHapley Additive exPlanations, and LIME, which is short for Local Interpretable Model-agnostic Explanations.
4. SHAP (SHapley Additive exPlanations)
Now, let’s talk about SHAP. SHAP is like a detective: it helps us uncover the secrets hidden within the black box. It does this by looking at each piece of evidence, in this case features, and figuring out how much each one contributes to the final decision. Imagine you’re baking a cake. SHAP uses principles from cooperative game theory to determine the impact of each ingredient in the cake. It ensures that each ingredient gets its fair share of credit in determining the taste of the cake.
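As a small illustration, here is a minimal sketch using the shap Python package. The names `model` and `X` are hypothetical placeholders, assuming a fitted tree-based classifier (for example a random forest) and its feature DataFrame:

```python
# Minimal SHAP sketch: `model` and `X` are hypothetical placeholders for
# a fitted tree-based classifier and the features it was trained on.
import shap

# shap.Explainer picks a suitable algorithm for the model type, using X
# as the background data for the game-theoretic comparison.
explainer = shap.Explainer(model, X)

# One Shapley value per feature per sample: each value is that feature's
# additive contribution, pushing the prediction up or down from the baseline.
shap_values = explainer(X)
print(shap_values.values.shape)  # (n_samples, n_features) for a single-output model
```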
5. Shapley values
In cooperative game theory, there’s a concept called Shapley values. These values fairly distribute credit for an outcome among the players of a game. SHAP applies this concept to machine learning models, treating features as players, so that each feature’s impact is accurately assessed. For example, when trying to predict customer churn, SHAP helps us understand which features matter the most for the model to say, “This customer will churn”. We can see the average positive and negative impact of each feature on the model's output. It’s like ensuring that each ingredient in your cake gets the right credit for its role in the cake’s deliciousness.
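To make the churn example concrete, a sketch along these lines would surface the average impact of each feature. The names `churn_model` and `X_churn` are hypothetical, assuming a fitted tree-based churn classifier and its customer features:

```python
# Hypothetical churn setup: `churn_model` is a fitted tree-based classifier
# and `X_churn` holds the customer features it was trained on.
import shap

explainer = shap.Explainer(churn_model, X_churn)
shap_values = explainer(X_churn)
# For a binary classifier you may first need to select one class,
# e.g. shap_values = shap_values[:, :, 1]

# Global importance: the mean absolute SHAP value per feature ranks which
# features matter most across all customers.
shap.plots.bar(shap_values)

# Beeswarm view: shows whether high or low feature values push the
# prediction towards "will churn" or "will not churn".
shap.plots.beeswarm(shap_values)
```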
6. SHAP image recognition
Applying SHAP to an image recognition example, we can see what the model bases its classification on. For the Dowitcher, the beak is the feature that drives its classification as such. Similarly, we can see for the Meerkat that the head played an important role.
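A rough sketch of how this kind of image explanation could be produced with shap's image masker. Here `model`, `X_images`, and `class_names` are assumptions: a callable returning class probabilities for a batch of images, a NumPy array of images, and the list of label names:

```python
# Hypothetical setup: `model` maps a batch of images to class probabilities,
# `X_images` is a NumPy array of images, `class_names` lists the labels.
import shap

# Hide image regions (here via inpainting) and measure how the
# predictions change when parts of the picture are masked out.
masker = shap.maskers.Image("inpaint_telea", X_images[0].shape)
explainer = shap.Explainer(model, masker, output_names=class_names)

# Explain the first two images; more evaluations give smoother attribution maps.
shap_values = explainer(X_images[:2], max_evals=500, batch_size=50)

# Overlay the per-pixel contributions (e.g. beak or head regions) on the images.
shap.image_plot(shap_values)
```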
7. LIME (Local Interpretable Model-agnostic Explanations)
Now, let’s meet LIME. LIME is like a curious scientist who wants to understand the magician’s trick by changing things slightly and observing the outcome. It tweaks the input data to see how the model reacts. Think of it as having a scientist in a laboratory setting. LIME helps us create a simpler, understandable recipe for individual predictions. In image recognition, LIME would ask, "What if we change some pixels in this image? Would it still be classified as a cat?"
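Here is a minimal sketch of that image scenario with the lime package, assuming a hypothetical `image` (a single RGB image as a NumPy array) and a `predict_fn` that returns class probabilities for a batch of images:

```python
# Hypothetical inputs: `image` is one RGB image as a NumPy array and
# `predict_fn` returns class probabilities for a batch of images.
from lime import lime_image
from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt

explainer = lime_image.LimeImageExplainer()

# LIME perturbs the image by hiding groups of pixels (superpixels),
# watches how the prediction changes, and fits a simple local model.
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=2, hide_color=0, num_samples=1000
)

# Highlight the superpixels that most support the top predicted label.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
plt.imshow(mark_boundaries(img, mask))
plt.show()
```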
8. Building a self-driving car
To compare SHAP and LIME, let’s imagine we’re building a self-driving car. We would use SHAP for global understanding, finding out which car features (like steering, brakes, and gas pedal) are most important for the car’s overall safety and performance across the entire dataset. It’s like deciding which car parts to focus on for an entire fleet of cars. On the other hand, LIME helps us explain specific moments when the car made decisions, like turning left at an intersection. It’s like conducting experiments in a lab to understand why our car made a specific turn in a particular situation.
9. Let's practice!
Let's dive into some exercises to get a deeper understanding of how to apply these explainability techniques to black box models.