Creating one-hot encoded labels
One-hot encoding is a technique that turns a single integer label into a vector of N elements, where N is the number of classes in your dataset. The vector contains only zeros and a single one, placed at the index of the class. In this exercise, you'll create the one-hot encoded vector of the label y provided.
You'll practice doing this manually, and then make your life easier by leveraging the help of PyTorch! Your dataset contains three classes, and the class labels range from 0 to 2 (i.e., 0, 1, 2).
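Before filling in the exercise, it can help to see the mapping for all three classes at once. This sketch (not part of the exercise itself) uses the identity-matrix trick, where row i of np.eye(num_classes) is the one-hot vector for label i:

```python
import numpy as np

num_classes = 3

# Row i of the identity matrix is the one-hot vector for label i
for label in range(num_classes):
    print(label, np.eye(num_classes, dtype=int)[label])
# 0 [1 0 0]
# 1 [0 1 0]
# 2 [0 0 1]
```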
NumPy is already imported as np, and torch.nn.functional as F. The torch package is also imported.
This exercise is part of the course “Introduction to Deep Learning with PyTorch”.
Exercise instructions
- Manually create a one-hot encoded vector of the ground truth label y by filling in the NumPy array provided.
- Create a one-hot encoded vector of the ground truth label y using PyTorch.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
y = 1
num_classes = 3
# Create the one-hot encoded vector using NumPy
one_hot_numpy = np.array([____, ____, ____])
# Create the one-hot encoded vector using PyTorch
one_hot_pytorch = ____
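As a sketch of how the blanks could be filled (not necessarily the official solution), the manual version places a 1 at index y, and the PyTorch version uses torch.nn.functional.one_hot on an integer tensor:

```python
import numpy as np
import torch
import torch.nn.functional as F

y = 1
num_classes = 3

# Manual one-hot: a zero vector with a 1 at index y
one_hot_numpy = np.array([0, 1, 0])

# PyTorch: F.one_hot takes an integer tensor and the number of classes
one_hot_pytorch = F.one_hot(torch.tensor(y), num_classes=num_classes)

print(one_hot_numpy)    # [0 1 0]
print(one_hot_pytorch)  # tensor([0, 1, 0])
```

Note that F.one_hot requires a tensor input, not a plain Python int, which is why y is wrapped in torch.tensor.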