
scikit-learn's KFold()

You just finished running a colleague's code that creates a random forest model and calculates an out-of-sample accuracy. You noticed that the code did not set a random state, and the errors you found were completely different from the errors your colleague reported.

To get a better estimate of how accurate this random forest model will be on new data, you have decided to generate some indices to use for KFold cross-validation.
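To see why the missing random state matters, here is a minimal sketch (using a hypothetical 85-row array in place of the exercise's X) showing that two KFold splitters configured with the same shuffle setting and random_state produce identical folds, which is what makes the out-of-sample errors reproducible.

import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-in for the exercise's feature matrix
X = np.arange(85).reshape(85, 1)

# Two splitters with the same settings yield identical folds
kf_a = KFold(n_splits=5, shuffle=True, random_state=1111)
kf_b = KFold(n_splits=5, shuffle=True, random_state=1111)

for (train_a, _), (train_b, _) in zip(kf_a.split(X), kf_b.split(X)):
    assert np.array_equal(train_a, train_b)  # same training indices every run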

This exercise is part of the course

Model Validation in Python


Exercise instructions

  • Call KFold() to split the data using five splits, shuffling, and a random state of 1111.
  • Use the split() method of KFold on X.
  • Print the number of indices in both the train and validation indices lists.

Hands-on interactive exercise

Try this exercise by completing this sample code.

from sklearn.model_selection import KFold

# Use KFold
kf = KFold(____, ____, ____)

# Create splits
splits = kf.____(____)

# Print the number of indices
for train_index, val_index in splits:
    print("Number of training indices: %s" % len(____))
    print("Number of validation indices: %s" % len(____))
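For reference, one way to complete the scaffold above is sketched below. The X used here is a hypothetical 85-row array, since the exercise environment supplies its own feature matrix.

import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-in for the exercise's feature matrix X
X = np.arange(85).reshape(85, 1)

# Use KFold with five splits, shuffling, and a fixed random state
kf = KFold(n_splits=5, shuffle=True, random_state=1111)

# Create splits: a generator of (train_index, val_index) pairs
splits = kf.split(X)

# Print the number of indices in each fold
for train_index, val_index in splits:
    print("Number of training indices: %s" % len(train_index))
    print("Number of validation indices: %s" % len(val_index))

With five folds, each validation set holds roughly one fifth of the rows and the remaining four fifths are used for training.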