
Spotting bias in machine learning

In the video, we learned about AI-enabled recruiting software that preferred male candidates because it learned from historical data in which more men were hired. When models affect people's lives, we need to evaluate them carefully for any discriminatory behavior they may have learned from historical data.

In this exercise, you have a model that attempts to predict whether someone will default on their loan. You can break down the resulting predictions by different features, such as demographics and employment status. Play around with these features and see if you can find anything suspicious about who is predicted to default and who isn't.
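One quick way to do this kind of breakdown is to group the model's predictions by each feature and compare the predicted default rates across groups. The following is a minimal sketch in Python, assuming a pandas DataFrame with hypothetical column names like gender, employment_status, and predicted_default; the exercise's actual data and columns will differ.

```python
import pandas as pd

# Hypothetical data standing in for the exercise's loan dataset.
loans = pd.DataFrame({
    "gender": ["male", "female", "female", "male", "female", "male"],
    "employment_status": ["employed", "employed", "unemployed",
                          "unemployed", "employed", "employed"],
    "predicted_default": [1, 0, 1, 1, 0, 0],  # model output: 1 = default
})

# Compare the predicted default rate across groups. A large gap on a
# demographic feature is a red flag worth investigating for bias.
for feature in ["gender", "employment_status"]:
    rates = loans.groupby(feature)["predicted_default"].mean()
    print(f"Predicted default rate by {feature}:\n{rates}\n")
```

A gap in these rates does not by itself prove discrimination, but it tells you which feature(s) deserve a closer look before the model is deployed.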

Which feature(s) should be investigated more for potential bias before deploying the model?

This exercise is part of the course Understanding Machine Learning.
