
Updating the weights manually

Now that you know how to access weights and biases, you will manually perform the job of the PyTorch optimizer. While PyTorch automates this, practicing it manually helps you build intuition for how models learn and adjust. This understanding will be valuable when debugging or fine-tuning neural networks.

A three-layer neural network has been created and stored in the model variable. A forward pass has already been run through this network, and the loss and its gradients have been calculated. A default learning rate, lr, has been chosen to scale the gradients when performing the update.
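As a reminder, the manual update scales each gradient by the learning rate and subtracts it from the corresponding weight. For a single layer, this looks roughly like the sketch below (assuming torch has been imported, model[0] is a linear layer whose gradients have been computed, and lr holds the learning rate):

with torch.no_grad():
    model[0].weight -= lr * model[0].weight.grad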

This exercise is part of the course

Introduction to Deep Learning with PyTorch


Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Access the weight of each linear layer
weight0 = model[0].weight
weight1 = model[1].weight
weight2 = model[2].weight

# Access the gradients of the weight of each linear layer
grads0 = ____
grads1 = ____
grads2 = ____
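For reference, here is a minimal, self-contained sketch of the completed pattern. It assumes the pre-built model is a torch.nn.Sequential of three nn.Linear layers; the layer sizes, input data, and learning rate are illustrative values, not the ones used in the course environment.

import torch
import torch.nn as nn

# Stand-in for the pre-built three-layer network
model = nn.Sequential(
    nn.Linear(16, 8),
    nn.Linear(8, 4),
    nn.Linear(4, 2),
)

lr = 0.001  # illustrative learning rate

# Forward pass, loss, and backward pass to populate the gradients
sample = torch.randn(1, 16)
target = torch.randn(1, 2)
loss = nn.functional.mse_loss(model(sample), target)
loss.backward()

# Access the weight of each linear layer
weight0 = model[0].weight
weight1 = model[1].weight
weight2 = model[2].weight

# Access the gradients of the weight of each linear layer
grads0 = weight0.grad
grads1 = weight1.grad
grads2 = weight2.grad

# Manually update the weights: scale each gradient by lr and subtract it
with torch.no_grad():
    weight0 -= lr * grads0
    weight1 -= lr * grads1
    weight2 -= lr * grads2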