Exercise

Total scoring

Remember that precision and recall may be weighted differently, which makes the F-beta score an important evaluation metric. Additionally, the AUC of the ROC curve is an important complement to precision and recall: as you saw earlier, a model can have a high AUC but low precision. In this exercise, you will calculate the full set of evaluation metrics for each classifier.

A print_estimator_name() function is provided that prints the name of each classifier. X_train, y_train, X_test, and y_test are available in your workspace, and the features have already been standardized. pandas as pd and sklearn are also available in your workspace.

Instructions
100 XP
  • Define an MLP classifier with one hidden layer of 10 units and 50 max iterations.
  • Train and predict for each classifier.
  • Use implementations from sklearn to get the precision, recall, F-beta score, and the AUC of the ROC score.
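
The steps above can be sketched as follows. This is a minimal, self-contained example, not the exercise's exact solution: the workspace arrays (X_train, y_train, X_test, y_test) and print_estimator_name() are not reproduced here, so synthetic stand-in data is generated with make_classification, and a single MLP classifier stands in for the full set of classifiers.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (precision_score, recall_score,
                             fbeta_score, roc_auc_score)

# Synthetic stand-in data; the real exercise provides these arrays,
# already standardized, in the workspace.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# MLP with one hidden layer of 10 units and 50 max iterations
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=50, random_state=0)

# Train and predict
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Evaluation metrics from sklearn; beta=1 here is an assumption —
# the exercise may weight precision vs. recall differently.
print("precision:", precision_score(y_test, y_pred))
print("recall:   ", recall_score(y_test, y_pred))
print("F-beta:   ", fbeta_score(y_test, y_pred, beta=1))
# ROC AUC is computed from predicted probabilities, not hard labels
print("ROC AUC:  ", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

Note that roc_auc_score is given the positive-class probabilities from predict_proba rather than the hard predictions; with hard labels the AUC degenerates to a single operating point.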