F-beta score
The F-beta score is a weighted harmonic mean of precision and recall, with the beta parameter controlling how the two are weighted. Often you will care more about precision than recall, which corresponds to a beta between 0 and 1. In this exercise, you will calculate the precision and recall of an MLP classifier along with the F-beta score using beta = 0.5.
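For reference, the formula is F-beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall), so a smaller beta pulls the score toward precision and a larger beta pulls it toward recall. A minimal sketch (not part of the exercise, using made-up labels) illustrating this with sklearn's fbeta_score:
# Illustration only: smaller beta weights precision more, larger beta weights recall more
from sklearn.metrics import fbeta_score, precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0]   # made-up labels for illustration
y_pred = [1, 1, 0, 0, 1, 0]   # here precision (0.67) is higher than recall (0.50)

print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
print(fbeta_score(y_true, y_pred, beta = 0.5))  # closest to precision
print(fbeta_score(y_true, y_pred, beta = 1.0))  # the standard F1 score
print(fbeta_score(y_true, y_pred, beta = 2.0))  # closest to recall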
X_train, y_train, X_test, y_test are available in your workspace, and the features have already been standardized. pandas as pd and sklearn are also available in your workspace. fbeta_score() from sklearn.metrics is available as well.
This exercise is part of the course “Predicting CTR with Machine Learning in Python”.
Exercise instructions
- Split the data into training and testing data.
- Define an MLP classifier, train it using .fit(), and predict using .predict().
- Use implementations from sklearn to get the precision, recall, and F-beta scores.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Set up MLP classifier, train and predict
X_train, X_test, y_train, y_test = ____(
    ____, ____, test_size = .2, random_state = 0)
clf = ____(hidden_layer_sizes = (16, ),
           max_iter = 10, random_state = 0)
y_pred = clf.____(____, ____).____(X_test)

# Evaluate precision and recall
prec = ____(y_test, ____, average = 'weighted')
recall = ____(y_test, ____, average = 'weighted')
fbeta = ____(y_test, ____, ____ = 0.5, average = 'weighted')
print("Precision: %s, Recall: %s, F-beta score: %s" % (prec, recall, fbeta))
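For comparison, here is one possible completed version of the sample code (a sketch, not necessarily the official solution), assuming X and y hold the standardized features and the click labels, and using train_test_split, MLPClassifier, precision_score, recall_score, and fbeta_score from sklearn:
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, fbeta_score

# Set up MLP classifier, train and predict
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size = .2, random_state = 0)   # X, y assumed to hold features and labels
clf = MLPClassifier(hidden_layer_sizes = (16, ),
                    max_iter = 10, random_state = 0)
y_pred = clf.fit(X_train, y_train).predict(X_test)

# Evaluate precision, recall, and the F-beta score with beta = 0.5
prec = precision_score(y_test, y_pred, average = 'weighted')
recall = recall_score(y_test, y_pred, average = 'weighted')
fbeta = fbeta_score(y_test, y_pred, beta = 0.5, average = 'weighted')
print("Precision: %s, Recall: %s, F-beta score: %s" % (prec, recall, fbeta))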