
Custom scorers in pipelines

You are proud of the improvement in your code quality, but you just remembered that you previously had to use a custom scoring metric to account for the fact that false positives are costlier to your startup than false negatives. You therefore want to equip your pipeline with scorers other than accuracy, including roc_auc_score(), f1_score(), and your own custom scoring function. The pipeline from the previous lesson is available as pipe, as is the parameter grid as params and the training data as X_train and y_train. You also have confusion_matrix() available for writing your own metric.
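As a minimal sketch of what such a custom metric could look like, the snippet below defines a hypothetical cost-sensitive score that uses confusion_matrix() to weight false positives more heavily than false negatives, then wraps it with make_scorer() so it can be passed to GridSearchCV. The function name and the 5:1 cost ratio are illustrative assumptions, not values from the course.

# A hedged sketch, not the course's official solution
from sklearn.metrics import confusion_matrix, make_scorer

def weighted_cost_score(y_true, y_pred, fp_cost=5.0, fn_cost=1.0):
    """Hypothetical metric: negated weighted cost of errors.

    False positives are penalized more than false negatives;
    the 5:1 ratio is an illustrative assumption. The total cost
    is negated so that higher scores are better.
    """
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return -(fp_cost * fp + fn_cost * fn)

# Wrap the metric so GridSearchCV can use it for scoring
cost_scorer = make_scorer(weighted_cost_score)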

This exercise is part of the course

Designing Machine Learning Workflows in Python


Hands-on interactive exercise

Try this exercise by completing this sample code.

# Create a custom scorer
scorer = ____(roc_auc_score)

# Initialize the CV object
gs = GridSearchCV(pipe, param_grid=params, scoring=____)

# Fit it to the data and print the winning combination
print(gs.____(X_train, y_train).____)
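For reference, a plausible completion of the blanks, assuming the standard scikit-learn API (make_scorer() to wrap roc_auc_score(), and GridSearchCV's fit() returning the fitted object whose best_params_ attribute holds the winning combination), might look like the sketch below; pipe, params, X_train, and y_train are assumed to be predefined as described above.

from sklearn.metrics import make_scorer, roc_auc_score
from sklearn.model_selection import GridSearchCV

# Create a custom scorer by wrapping the AUC metric
scorer = make_scorer(roc_auc_score)

# Initialize the CV object with the custom scorer
gs = GridSearchCV(pipe, param_grid=params, scoring=scorer)

# Fit it to the data and print the winning parameter combination
# (fit() returns the fitted estimator, so the call can be chained)
print(gs.fit(X_train, y_train).best_params_)

Note that make_scorer(roc_auc_score) scores hard class predictions; depending on your scikit-learn version, passing needs_threshold=True (or response_method in newer releases) to make_scorer() would compute the AUC from continuous scores instead.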