
XAI by design

1. XAI by design

Excellent work on the course thus far. Now that we've seen how XAI allows AI systems to be more tailored to the end-user, we will look into how we can further incorporate XAI by design.

2. Scenario: the Smell Nice Shop

Consider the Smell Nice Shop, where the staff are tasked with determining how much shampoo to stock. Here we can explore the application of explainability by design. First, we outline the existing workflow. Each week, the staff estimate the needed stock based on past sales and intuition. Despite these efforts, their predictions sometimes miss the mark.

3. Scenario: the Smell Nice Shop

During our pilot at the Smell Nice Shop, we deployed an AI system that markedly enhanced the accuracy of stock level predictions. Together with the shop, we decided to implement this AI tool to forecast the necessary inventory. Although the AI system frequently delivers accurate predictions, there are moments when its estimates are significantly off. In these instances, it's vital for the staff to analyze and understand why such discrepancies occurred in the AI's forecasts.

4. Scenario: the Smell Nice Shop

If we use a very complex model, such as a deep learning model, it can detect intricate patterns between all the different variables. This might make the model more precise, but it becomes hard for the staff working at the shop to pinpoint why the AI system came to its decision.

5. Scenario: the Smell Nice Shop

If we use a simpler model, for instance a decision tree, the explanation will be more tangible, even though it might be less precise. This is why, during the development of the AI system, we have to take into account how we can interpret the model. For the end user, raw accuracy is often less important: it can be far more valuable for the shop staff to understand the AI model's decision-making.
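The interpretability of a decision tree can be made concrete: a shallow tree can be printed as plain if/else rules that the staff can read. Below is a minimal sketch using scikit-learn, with made-up weekly data (past sales and whether a promotion was running); the feature names and numbers are illustrative, not from a real shop.

```python
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical weekly data: [bottles sold last week, promotion running (0/1)]
X = [[80, 0], [120, 1], [90, 0], [150, 1], [70, 0], [130, 1]]
y = [85, 140, 95, 160, 75, 145]  # bottles actually needed the next week

# A shallow tree keeps the decision logic small enough to read at a glance
model = DecisionTreeRegressor(max_depth=2, random_state=0)
model.fit(X, y)

# The fitted tree prints as plain if/else rules the staff can follow
rules = export_text(model, feature_names=["sold_last_week", "promotion"])
print(rules)
```

A deep learning model trained on the same data might fit slightly better, but it offers no comparable human-readable summary of its decision path.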

6. Integrating explainability by design

From the very beginning of designing the AI system, we have to look at how the shop staff will interact with the system. That's what we call explainability by design. The staff of the Smell Nice Shop will be far more likely to interact with the first system on the left, where we used a simple AI model in combination with XAI techniques. The system on the right could be perfect at predicting the amount of stock required, but it might be too complex for the staff to use. Once we have defined the end user's goals and requirements, we can start selecting the right model for the task at hand.

7. Documentation during development

During the development of AI systems, there should be clear documentation of the data sources, model choices, and the rationale behind these choices. This ensures that we include all relevant data that could influence the required stock for the shop. It also includes deciding on the required explainability techniques, such as SHAP. Using SHAP, we can show how much each feature contributed to a prediction, as in the example shown.
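To illustrate what such feature contributions look like, here is a minimal sketch for a linear model, where the SHAP value of each feature reduces to its coefficient times the feature's deviation from its mean. The coefficients, means, and baseline below are assumed for illustration, not taken from a real shop model.

```python
# SHAP-style contributions for a linear forecast model (illustrative numbers).
# For a linear model, each feature's Shapley value is coef * (value - mean).
means = {"sold_last_week": 100.0, "promotion": 0.5}
coefs = {"sold_last_week": 1.1, "promotion": 40.0}
baseline = 90.0  # the model's average prediction across past weeks

def explain(week):
    """Return each feature's contribution to this week's forecast."""
    return {f: coefs[f] * (week[f] - means[f]) for f in coefs}

week = {"sold_last_week": 130, "promotion": 1}
contributions = explain(week)
forecast = baseline + sum(contributions.values())
print(contributions)  # {'sold_last_week': 33.0, 'promotion': 20.0}
print(forecast)       # 143.0
```

Documenting a breakdown like this alongside each forecast lets the staff see, in bottles, what pushed the prediction up or down.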

8. Continuous monitoring and improvement

Because we have implemented the right explainability tools, and the staff at the shop understand the AI's decision-making, we can gather feedback on the initial prototype. The staff may spot why a prediction is wrong, for instance because recent shampoo promotion campaigns are missing from the data. This continuous feedback loop helps in monitoring and improving the AI model over time.
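A simple way to support this feedback loop is to compare each week's forecast with actual sales and flag large misses for staff review. The tolerance and history below are assumed placeholders; in practice they would be tuned together with the shop.

```python
# Minimal monitoring sketch: flag weeks whose forecast error exceeds a
# tolerance so the staff can investigate and feed the cause back to us.
TOLERANCE = 20  # bottles; an assumed threshold, tuned with the shop

history = [
    {"week": 1, "forecast": 140, "actual": 150},
    {"week": 2, "forecast": 90,  "actual": 145},  # a promotion week missing from the data
    {"week": 3, "forecast": 110, "actual": 105},
]

flagged = [h for h in history if abs(h["forecast"] - h["actual"]) > TOLERANCE]
for h in flagged:
    # The staff review these weeks; their explanation (e.g. an unrecorded
    # campaign) becomes input for the next model iteration.
    print(f"Week {h['week']}: off by {h['actual'] - h['forecast']} bottles")
```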

9. Determine risks before deployment

Before we deploy the first version of the AI system, we want to make sure we look at ethical considerations. What could the potential risks be if the AI system is wrong? What if it predicts a very high number of required shampoo bottles and the staff at the shop do not notice? We want to make sure that we mitigate these potential risks.
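One common mitigation is a guardrail that escalates implausible forecasts to a human before the staff act on them. The bounds below are assumed for illustration; a real deployment would derive them from the shop's historical order range.

```python
# Risk-mitigation sketch: reject or escalate forecasts outside a plausible
# range instead of silently passing them on to the staff.
MIN_STOCK, MAX_STOCK = 50, 250  # assumed plausible bounds for the shop

def check_forecast(bottles):
    """Escalate implausible forecasts for human review."""
    if not MIN_STOCK <= bottles <= MAX_STOCK:
        return f"REVIEW: {bottles} bottles is outside {MIN_STOCK}-{MAX_STOCK}"
    return f"OK: order {bottles} bottles"

print(check_forecast(120))  # within range, passed through
print(check_forecast(900))  # implausibly high, flagged for human review
```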

10. Let's practice!

Let's look at how to apply XAI by design with some exercises!