Exercise

Reminder of performance metrics

Remember the credit dataset? With all the extra knowledge you now have about metrics, let's take another look at how good a random forest is on this dataset. You have already trained your classifier and obtained your confusion matrix on the test data. The entries of that confusion matrix are available to you as tp, fp, fn and tn: the counts of true positives, false positives, false negatives, and true negatives respectively. You also have the ground truth labels for the test data, y_test, and the predicted labels, preds. The functions f1_score() and precision_score() have also been imported.

Instructions 1/3
  • 1

    Compute the F1 score for your classifier using the function f1_score().

  • 2

    Compute the precision for this classifier using the function precision_score().

  • 3

    Accuracy is the proportion of examples that were labelled correctly. Compute it without using accuracy_score().
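The three steps above can be sketched as follows. Since the exercise's actual y_test, preds, and confusion-matrix counts come from the credit dataset, this sketch substitutes a small toy array for each; the variable names mirror the exercise but the values are illustrative only.

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score

# Toy stand-ins for the exercise's y_test and preds
# (the real ones come from the credit dataset's random forest).
y_test = np.array([1, 0, 1, 1, 0, 0, 1, 0])
preds = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# Step 1: the F1 score, the harmonic mean of precision and recall.
f1 = f1_score(y_test, preds)

# Step 2: precision, the fraction of predicted positives that are correct.
prec = precision_score(y_test, preds)

# Step 3: accuracy by hand, without accuracy_score().
# Recover the confusion-matrix counts the exercise calls tp, fp, fn, tn.
tp = int(np.sum((preds == 1) & (y_test == 1)))
fp = int(np.sum((preds == 1) & (y_test == 0)))
fn = int(np.sum((preds == 0) & (y_test == 1)))
tn = int(np.sum((preds == 0) & (y_test == 0)))

# Accuracy = correctly labelled examples / all examples.
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f1, prec, accuracy)
```

Equivalently, accuracy can be computed directly as `(preds == y_test).mean()`; both forms express the same proportion of correctly labelled examples.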