Overfitting and underfitting
Interpreting model complexity is a great way to evaluate performance in supervised learning. Your aim is to produce a model that captures the relationship between features and the target variable, while also generalizing well when exposed to new observations.
The training and test sets have been created from the churn_df dataset and preloaded as X_train, X_test, y_train, and y_test. In addition, KNeighborsClassifier has been imported for you along with numpy as np.
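For context, the preloaded split could be produced roughly as sketched below. This is only an illustration, not the course's actual setup code: churn_df is assumed to be a pandas DataFrame already loaded in memory, and "churn" is an assumed name for its target column, as are the split parameters.

# Illustration of how the preloaded objects might be created -- not the
# course's exact code. churn_df and its "churn" column are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X = churn_df.drop("churn", axis=1).values
y = churn_df["churn"].values

# Hold out 20% of observations so test accuracy reflects unseen data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)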
This exercise is part of the course Supervised Learning with scikit-learn.
Exercise instructions
- Create neighbors as a numpy array of values from 1 up to and including 12.
- Instantiate a KNeighborsClassifier, with the number of neighbors equal to the neighbor iterator.
- Fit the model to the training data.
- Calculate accuracy scores for the training set and test set separately using the .score() method, and assign the results to the train_accuracies and test_accuracies dictionaries, respectively, utilizing the neighbor iterator as the index.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Create neighbors
neighbors = np.arange(____, ____)
train_accuracies = {}
test_accuracies = {}
for neighbor in neighbors:
    # Set up a KNN Classifier
    knn = ____(____=____)
    # Fit the model
    knn.____(____, ____)
    # Compute accuracy
    train_accuracies[____] = knn.____(____, ____)
    test_accuracies[____] = knn.____(____, ____)
print(neighbors, '\n', train_accuracies, '\n', test_accuracies)
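One possible completed version of the loop is sketched below. It assumes the preloaded X_train, X_test, y_train, and y_test from the exercise, and fills the blanks according to the instructions above: neighbors from 1 up to and including 12, a KNeighborsClassifier per value of the neighbor iterator, and accuracy dictionaries keyed by that iterator.

# Possible solution sketch -- assumes X_train, X_test, y_train, y_test are preloaded
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Create neighbors: values 1 through 12 inclusive
neighbors = np.arange(1, 13)
train_accuracies = {}
test_accuracies = {}

for neighbor in neighbors:
    # Set up a KNN Classifier with the current number of neighbors
    knn = KNeighborsClassifier(n_neighbors=neighbor)

    # Fit the model to the training data
    knn.fit(X_train, y_train)

    # Compute accuracy on the training and test sets, keyed by the neighbor iterator
    train_accuracies[neighbor] = knn.score(X_train, y_train)
    test_accuracies[neighbor] = knn.score(X_test, y_test)

print(neighbors, '\n', train_accuracies, '\n', test_accuracies)

Comparing the two dictionaries shows the complexity trade-off: very small values of n_neighbors tend to give high training accuracy but weaker test accuracy (overfitting), while very large values can underfit both sets.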