Quiz 3 - Question 2

Using LoRA on a pre-trained foundation model such as Gemma with a rank of 64 requires considerably more memory than using LoRA with a rank of 8.
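
To reason about this claim, it helps to see how an adapter's size scales with rank. The sketch below is illustrative only: `lora_params` is a hypothetical helper, and the 2048-dimensional projection is an assumed, Gemma-like size rather than the model's exact configuration. For a frozen weight matrix W of shape (d_out, d_in), LoRA trains two small matrices A (r × d_in) and B (d_out × r), so the adapter adds r · (d_in + d_out) parameters.

```python
# Minimal sketch (assumed, illustrative dimensions -- not exact Gemma values)
# showing how LoRA's trainable-parameter count scales with rank r.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters LoRA adds to one weight matrix: r * (d_in + d_out)."""
    return rank * (d_in + d_out)

# Hypothetical square attention projection of a ~2B-parameter model.
d_in = d_out = 2048
base_matrix = d_in * d_out  # frozen base weights for this one matrix

for rank in (8, 64):
    adapter = lora_params(d_in, d_out, rank)
    print(f"rank={rank:2d}: {adapter:,} adapter params "
          f"({adapter / base_matrix:.1%} of the frozen matrix)")
```

Under these assumptions, rank 8 adds about 33K parameters per matrix and rank 64 about 262K, while the frozen matrix itself holds roughly 4.2M. In both cases the base model's weights, not the LoRA rank, dominate total memory.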

This exercise is part of the course

Google DeepMind: Fine-Tune Your Model
