
Total scoring

Remember that precision and recall can be weighted differently, which makes the F-beta score an important evaluation metric. Additionally, the AUC of the ROC curve is an important complementary metric to precision and recall, since you saw earlier that a model can have a high AUC but low precision. In this exercise, you will calculate the full set of evaluation metrics for each classifier.
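To make the trade-off concrete, here is a minimal sketch (using made-up label arrays, not the exercise data) showing that fbeta_score with beta = 0.5 is pulled toward precision, while beta = 2 is pulled toward recall:

from sklearn.metrics import precision_score, recall_score, fbeta_score

# Toy labels for illustration only
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]

print(precision_score(y_true, y_pred))        # ~0.667
print(recall_score(y_true, y_pred))           # 0.5
print(fbeta_score(y_true, y_pred, beta=0.5))  # ~0.625, closer to precision
print(fbeta_score(y_true, y_pred, beta=2))    # ~0.526, closer to recall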

A print_estimator_name() function is provided that returns the name of each classifier. X_train, y_train, X_test, and y_test are available in your workspace, and the features have already been standardized. pandas as pd and sklearn are also available in your workspace.
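The implementation of print_estimator_name() is not shown here; as a rough sketch of what such a helper might look like, it could simply return the estimator's class name:

def print_estimator_name(estimator):
    # Hypothetical helper: return the estimator's class name, e.g. 'LogisticRegression'
    return type(estimator).__name__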

This exercise is part of the course

Predicting CTR with Machine Learning in Python


Exercise instructions

  • Define an MLP classifier with one hidden layer of 10 hidden units and a maximum of 50 iterations.
  • Train and predict with each classifier.
  • Use implementations from sklearn to get the precision, recall, F-beta score, and the AUC of the ROC curve.

Hands-on interactive exercise

Work through this exercise with the sample code below.

# Imports (already available in the exercise workspace)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, fbeta_score, roc_auc_score

# Create classifiers, including an MLP with one hidden layer of 10 units
# and at most 50 training iterations
clfs = [LogisticRegression(), DecisionTreeClassifier(), RandomForestClassifier(), 
        MLPClassifier(hidden_layer_sizes = (10, ), max_iter = 50)]

# Produce all evaluation metrics for each classifier
for clf in clfs:
  print("Evaluating classifier: %s" %(print_estimator_name(clf)))
  clf.fit(X_train, y_train)            # fit once, then reuse the trained model
  y_score = clf.predict_proba(X_test)  # class probabilities, used for ROC AUC
  y_pred = clf.predict(X_test)         # hard class labels, used for precision/recall/F-beta
  prec = precision_score(y_test, y_pred, average = 'weighted')
  recall = recall_score(y_test, y_pred, average = 'weighted')
  fbeta = fbeta_score(y_test, y_pred, beta = 0.5, average = 'weighted')
  roc_auc = roc_auc_score(y_test, y_score[:, 1])
  print("Precision: %s, Recall: %s, F-beta score: %s, AUC of ROC curve: %s" 
        %(prec, recall, fbeta, roc_auc))
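Note that roc_auc_score expects a continuous score for the positive class, which is why the second column of predict_proba is passed to it, while precision_score, recall_score, and fbeta_score are computed from the hard class labels returned by predict.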