
Text and image explainability with LIME

1. Text and image explainability with LIME

LIME can also explain the predictions made on text and images.

2. Text-based models

Text-based models are designed to process and interpret written language. They take text as input and make predictions about it. A common task for such models is sentiment analysis, where the model reads a review or comment and decides whether the sentiment expressed is positive or negative. However, the internal workings of these models are often opaque, which is why they are frequently described as "black boxes". To address this, we use the LimeTextExplainer, which is tailored to text data. The resulting explanation reveals how each word of the input text impacts the model's prediction, helping us build more transparent and trustworthy AI systems. No knowledge of NLP or text preprocessing tools is needed; the exercises include all necessary functions and models so you can focus on interpreting LIME's explanations.

3. LIME text explainer

Suppose we have a model that predicts the sentiment of a review, and we want to explain its prediction for a specific text_instance. We start by importing LimeTextExplainer from the lime.lime_text module and defining the text_instance we want to explain. Next, we create an instance of LimeTextExplainer and call its explain_instance method, passing in our text_instance and the model's prediction function, model_predict. A manually defined prediction function that returns class probabilities is needed whenever the model comes from a framework other than sklearn, such as deep learning or large models. In the exercises, this function will be provided, so there is no need to worry about its details. Using exp.as_pyplot_figure(), we can visualize the contributions of individual words to the prediction. Notably, 'great' and 'poor' are key determinants of sentiment.
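Here is a minimal, hypothetical sketch of this workflow. It assumes a trained scikit-learn sentiment pipeline named sentiment_pipeline and an example review; both are illustrative stand-ins for the objects provided in the exercises.

```python
import matplotlib.pyplot as plt
from lime.lime_text import LimeTextExplainer

# Illustrative review to explain (stand-in for the exercise's text_instance)
text_instance = "The staff was great but the food was poor."

def model_predict(texts):
    # LIME calls this with a list of strings and expects an array of
    # class probabilities with shape (n_texts, n_classes).
    return sentiment_pipeline.predict_proba(texts)  # hypothetical trained pipeline

# Explainer tailored for text; class_names makes the plot easier to read
explainer = LimeTextExplainer(class_names=["negative", "positive"])

# Fit a local surrogate model around text_instance and keep the top words
exp = explainer.explain_instance(text_instance, model_predict, num_features=10)

# Bar chart of each word's contribution to the predicted sentiment
exp.as_pyplot_figure()
plt.show()
```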

4. Image-based models

Similarly, image-based models, which are often highly complex, are designed to interpret visual data. They take an image as input and make predictions about it, for example in food classification, where the model predicts the type of food shown in an image. To explain the predictions of such models, we use LimeImageExplainer, which highlights the parts of the image that influence the model's prediction the most. Remember, our focus is on interpreting predictions: the exercises provide all configurations, so no familiarity with computer vision or image processing is needed.

5. LIME image explainer

Suppose we have a model that classifies food images, and we want to explain its prediction for this ice cream image. We import LimeImageExplainer from lime.lime_image and create an instance of it. We then call the explain_instance method, passing in the input image, model_predict (a manually defined prediction function, just like for text models), and num_samples, which controls how many perturbed samples are generated for the explanation. Finally, explanation.get_image_and_mask highlights the regions of the image that most influenced the model's prediction. We focus on the top predicted label using explanation.top_labels[0], and with hide_rest=True, we hide the less relevant parts.
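As a minimal sketch, assume image is a NumPy array of shape (height, width, 3) holding the ice cream photo and food_model is a trained classifier whose predict method returns class probabilities; both names are hypothetical placeholders for the objects provided in the exercises.

```python
import numpy as np
from lime.lime_image import LimeImageExplainer

def model_predict(images):
    # LIME calls this with a batch of perturbed images, shape (n, height, width, 3),
    # and expects class probabilities of shape (n, n_classes). Apply here whatever
    # preprocessing the (hypothetical) food_model requires.
    return food_model.predict(np.asarray(images))

explainer = LimeImageExplainer()

explanation = explainer.explain_instance(
    image,              # the ice cream image as a NumPy array
    model_predict,      # manually defined prediction function
    top_labels=5,       # keep explanations for the most likely classes
    hide_color=0,       # perturbed superpixels are blacked out
    num_samples=1000,   # number of perturbed images used to fit the local model
)

# Keep only the regions supporting the top predicted label; hide the rest
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=True,
)
```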

6. LIME image explainer

Using plt.imshow(temp), we observe that the least important parts are obscured with black, while the important ones remain visible. Notably, the ice cream itself remains prominent, confirming that the model bases its prediction on the relevant part of the image.
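Continuing the sketch above, displaying the masked image is a single call; the influential superpixels stay visible while everything else is blacked out because hide_rest=True was used.

```python
import matplotlib.pyplot as plt

plt.imshow(temp)   # important regions visible, the rest obscured in black
plt.axis("off")
plt.show()
```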

7. Let's practice!

Let's practice!
