Evaluating faithfulness with LIME
You are provided with a LIME explanation for a sample X_instance from the income dataset. The explanation identifies gender as the most important predictor, so you will change its value and compute a faithfulness score that measures how well the explanation aligns with the model's actual behavior for that instance.
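The explanation itself is pre-computed for this exercise. For context, here is a minimal sketch of how such a LIME explanation is typically produced with the lime package; the training frame X_train and the class labels shown are assumptions for illustration, not part of the exercise environment.

from lime.lime_tabular import LimeTabularExplainer

# Build a tabular explainer from training data (X_train is hypothetical here)
explainer = LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["<=50K", ">50K"],  # example labels for the income dataset
    mode="classification",
)

# Explain the single instance; as_list() returns (feature, weight) pairs,
# and their ranking shows which feature (here, gender) matters most locally
explanation = explainer.explain_instance(
    X_instance.values[0], model.predict_proba, num_features=5
)
print(explanation.as_list())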
This exercise is part of the course Explainable AI in Python.
Exercise instructions
- Change the gender value to 0 in X_instance.
- Generate a new_prediction probability.
- Estimate the faithfulness of LIME's explanation.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
original_prediction = model.predict_proba(X_instance)[0, 1]
print(f"Original prediction: {original_prediction}")
# Change the gender value to 0
____
# Generate the new prediction
new_prediction = ____
print(f"Prediction after perturbing 'gender': {new_prediction}")
# Estimate faithfulness
faithfulness_score = ____
print(f"Local Faithfulness Score: {faithfulness_score}")