1. Model risks
In this video, we will take a closer look at AI model development, focusing on common risks and the importance of risk assessment throughout the model lifecycle.
2. Monitoring risks over the lifecycle
Keeping an eye on risks during model development reduces the potential harms of deploying a flawed system. However, it is also key to monitor AI model risks after deployment. AI models can change over time (commonly called drift) due to changes in the real world or in the data they receive, leading to biased or inaccurate outputs if not monitored and addressed.
3. Seven common model risks
The common model risks we will cover are lack of model transparency and interpretability, bias, hallucination, model drift, overfitting, underfitting, and data leakage. Let’s look into each one.
4. Model transparency and interpretability
The first risks, lack of transparency and lack of interpretability, both concern our ability to understand AI models. A model lacks transparency when its inner workings are not accessible and clear to those who use it. A model lacks interpretability when people cannot understand why it produces certain outputs from specific inputs. Both are crucial for trust in AI. This is especially true in sensitive areas like healthcare or finance, where understanding a model's decision-making process is as important as the decision itself.
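As a minimal sketch of what an interpretable model looks like in practice (assuming scikit-learn and its bundled iris dataset, which this course does not prescribe), a shallow decision tree's learned rules can be printed and audited directly:

```python
# Sketch: an inherently interpretable model whose decision rules
# can be printed and reviewed (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# export_text renders the learned if/then rules, exposing exactly
# how inputs map to outputs.
print(export_text(model, feature_names=iris.feature_names))
```

Black-box models such as deep neural networks do not offer this kind of direct inspection, which is why the XAI techniques covered later in this video matter.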
5. Bias
The second risk, bias, was mentioned previously. Unfair preferences or prejudices in models can arise at various stages. Common causes are biased, unrepresentative, or flawed training data; the choice of algorithms and how they are configured; and human assumptions made during development. Detecting and mitigating bias is vital to developing fair and ethical AI systems.
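One simple illustration of bias detection is comparing positive prediction rates across groups, a check known as demographic parity. The data below is synthetic and the group labels hypothetical; this is a sketch of the idea, not a complete fairness audit:

```python
# Sketch: checking demographic parity on synthetic predictions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)  # hypothetical protected attribute
# Deliberately biased positive rates: 60% for group A, 40% for group B.
pred = rng.random(1000) < np.where(group == "A", 0.6, 0.4)

rate_a = pred[group == "A"].mean()
rate_b = pred[group == "B"].mean()
print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, gap={abs(rate_a - rate_b):.2f}")
# A large gap suggests the model favors one group and warrants investigation.
```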
6. Hallucination
The third risk, hallucination, was also introduced previously: a model confidently produces outputs that are false or unsupported by its inputs or training data. These errors can occur for many reasons, including insufficient training data, incorrect assumptions made by the model, and biases in the data.
7. Model drift
The fourth risk is model drift, when previously accurate models begin to lose their relevance and accuracy. The model's performance degrades because the data it now sees is different from, or no longer follows, the patterns it learned before. For instance, spam filters become less accurate as spammers adopt new deceptive tactics.
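A common way to catch drift is to compare the distribution of incoming data against the training data. Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test, assuming SciPy is available; the "training" and "live" samples are synthetic stand-ins:

```python
# Sketch: flagging input drift with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution at training time
live_feature = rng.normal(loc=0.5, scale=1.0, size=5000)   # shifted production distribution

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift suspected (KS={stat:.3f}, p={p_value:.1e}); consider retraining.")
else:
    print("No significant distribution shift detected.")
```

In production, such checks would typically run on a schedule, per feature, with alerts when shifts persist.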
8. Effective and accurate learning
The final risks, overfitting, underfitting, and data leakage, all relate to how effectively and accurately a model learns from data during development. Overfitting occurs when a model learns the details and noise in the training data so closely that it performs poorly on new data. Underfitting happens when a model is too simple to learn the underlying patterns in the data. Data leakage occurs when information that would not be available at prediction time, such as test data, inadvertently enters the training process. Each impacts the model's ability to make accurate predictions.
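To make this concrete, here is a minimal sketch (assuming scikit-learn and its bundled digits dataset) that diagnoses underfitting and overfitting by comparing training and validation accuracy:

```python
# Sketch: train-vs-validation accuracy as an over/underfitting check.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for depth in (2, None):  # a very shallow tree vs. an unconstrained one
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    # Low accuracy on both sets suggests underfitting; a large gap between
    # training and validation accuracy suggests overfitting.
    print(f"max_depth={depth}: train={model.score(X_train, y_train):.2f}, "
          f"validation={model.score(X_val, y_val):.2f}")
```

Guarding against data leakage also means fitting any preprocessing, such as scaling or feature selection, on the training split only and then applying it unchanged to the validation split.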
9. Explainable AI (XAI)
If not controlled across the lifecycle of AI model development and deployment, model risks can significantly undermine the success and reliability of AI applications. Techniques such as Explainable AI, or XAI, can significantly help manage them. XAI is a set of methods and techniques that aim to make AI models more understandable and transparent to humans, providing clear explanations for the reasoning behind a model's outputs. This not only builds trust among users but also aids in identifying and correcting errors or biases within the model.
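As one concrete, model-agnostic XAI technique, permutation importance shuffles one input feature at a time and measures how much performance drops. The sketch below assumes scikit-learn and its bundled breast cancer dataset; it illustrates the idea rather than prescribing a single XAI method:

```python
# Sketch: permutation importance as a model-agnostic explanation method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time and measuring the accuracy drop shows
# which inputs the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Dedicated libraries such as SHAP and LIME offer richer, per-prediction explanations in the same spirit.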
10. More robust, fair, and secure AI systems
By being aware of these common risks, we can build more robust, fair, and secure AI systems. Assessing risks and implementing the measures we will present in upcoming videos helps ensure that AI serves as a beneficial tool for everyone.
11. Let's practice!
Now that you've gained a deeper understanding of the risks associated with AI model development and deployment, let's practice!