
Custom scorers in pipelines

You are proud of the improvement in your code quality, but you just remembered that you previously had to use a custom scoring metric to account for the fact that false positives are costlier to your startup than false negatives. You therefore want to equip your pipeline with scorers other than accuracy, including roc_auc_score(), f1_score(), and your own custom scoring function. The pipeline from the previous lesson is available as pipe, the parameter grid as params, and the training data as X_train and y_train. You also have confusion_matrix() available for writing your own metric.
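For the custom metric itself, one possible shape (a minimal sketch; the cost weights of 10 per false positive and 1 per false negative are illustrative assumptions, not values given in the exercise) is to unpack the counts from confusion_matrix() and negate the weighted cost so that higher scores mean better models:

from sklearn.metrics import confusion_matrix, make_scorer

def cost_sensitive_score(y_true, y_pred, fp_cost=10.0, fn_cost=1.0):
    # Unpack the binary confusion matrix: tn, fp, fn, tp
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    # Negate the weighted cost so that a higher score is better
    return -(fp * fp_cost + fn * fn_cost)

# Wrapping the metric with make_scorer() makes it usable by GridSearchCV
custom_scorer = make_scorer(cost_sensitive_score)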

This exercise is part of the course Designing Machine Learning Workflows in Python.


Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Create a custom scorer
scorer = ____(roc_auc_score)

# Initialize the CV object
gs = GridSearchCV(pipe, param_grid=params, scoring=____)

# Fit it to the data and print the winning combination
print(gs.____(X_train, y_train).____)
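Once the blanks are filled in, the finished search might look like the sketch below. This is not the official solution: it assumes the custom_scorer defined in the earlier sketch, the preloaded pipe, params, X_train, and y_train, and an arbitrary choice to refit on the custom metric. Passing a dict of scorers lets GridSearchCV track several metrics at once.

from sklearn.model_selection import GridSearchCV

# Evaluate several metrics per candidate; 'roc_auc' and 'f1' are
# scikit-learn's built-in scorer strings
scoring = {'roc_auc': 'roc_auc', 'f1': 'f1', 'custom': custom_scorer}

# With multiple scorers, refit must name the metric that picks the winner
gs = GridSearchCV(pipe, param_grid=params, scoring=scoring, refit='custom')
print(gs.fit(X_train, y_train).best_params_)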