Spotting bias in machine learning
In the video, we learned about AI-enabled recruiting software that preferred male candidates because it was trained on historical data from a period when more men were hired. When models affect people's lives, we need to carefully evaluate them for discriminatory behavior that can be learned from historical data.
In this exercise, you have a model that attempts to predict whether someone will default on their loan. You can break down the resulting predictions by different features, like demographics and employment status. Play around with these features and see if you can find anything suspicious about who is predicted to default and who isn't.
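To make the idea concrete, here is a minimal sketch of what "breaking down predictions by a feature" can look like in Python with pandas. The data and the column names (gender, employment_status, predicted_default) are hypothetical stand-ins, not the exercise's actual dataset:

```python
import pandas as pd

# Hypothetical data: model predictions joined with applicant features.
# Column names are assumptions for illustration only.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "employment_status": ["employed", "employed", "unemployed", "employed",
                          "employed", "unemployed", "unemployed", "employed"],
    "predicted_default": [1, 0, 1, 0, 1, 1, 1, 0],
})

# For each feature, compute the share of each group predicted to default.
for feature in ["gender", "employment_status"]:
    rates = df.groupby(feature)["predicted_default"].mean()
    print(f"\nPredicted default rate by {feature}:")
    print(rates)
    # A large gap between groups is a warning sign that the feature
    # deserves a closer look for potential bias before deployment.
    print(f"Gap between groups: {rates.max() - rates.min():.2f}")
```

A large gap on a demographic feature does not prove discrimination by itself, but it tells you where to dig deeper before trusting the model.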
Which feature(s) should be investigated more for potential bias before deploying the model?
This exercise is part of the course Understanding Machine Learning.