
Exercise

LoRA fine-tuning Llama for customer service

You have been tasked with fine-tuning a language model to answer customer service questions. The Llama models are quite good at question answering and should work well for this customer service task. Unfortunately, you don't have the compute capacity for full fine-tuning, so you must instead apply LoRA fine-tuning to the bitext dataset.

You want to train Maykeye/TinyLLama-v0. The training script is almost complete: the training code is provided for you, with the exception of the LoRA configuration parameters.

The relevant model, tokenizer, dataset, and training arguments have been pre-loaded for you in model, tokenizer, dataset, and training_arguments.

Instructions

  • Add the argument to set your LoRA adapters to rank 2.
  • Set the scaling factor so that it is double your rank.
  • Set the task type used with Llama-style models in your LoRA configuration.
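
To make the instructions concrete, here is a minimal sketch of what the completed configuration might look like. The exercise does not name the library, so this assumes the Hugging Face peft package, which is commonly used for LoRA; the pre-loaded model variable is reused as the base model.

```python
# Illustrative sketch only: assumes Hugging Face's peft library
# and that `model` is the pre-loaded base model from the exercise.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=2,                    # rank of the LoRA adapter matrices
    lora_alpha=4,           # scaling factor, double the rank
    task_type="CAUSAL_LM",  # task type for Llama-style (causal LM) models
)

# Attach the LoRA adapters to the pre-loaded model before training
model = get_peft_model(model, lora_config)
```

With rank 2, each adapter adds a pair of small low-rank matrices per targeted weight, and the effective update is scaled by lora_alpha / r (here 4 / 2 = 2), which is why the scaling factor is set to double the rank.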