Learn / Courses / Fine-Tuning with Llama 3


Exercise

Using LoRA adapters

You work at a startup that provides customer service chatbots, which automatically resolve simple questions that customers may have.

You have been tasked with fine-tuning the Maykeye/TinyLLama-v0 language model to answer customer service questions using the bitext dataset. This model will be used in a chatbot that your team provides. The training script is almost complete, but you want to integrate LoRA into the fine-tuning: it is more parameter-efficient and would let your team's training pipeline finish more quickly during deployments.

The relevant model, tokenizer, dataset, and training arguments have been pre-loaded for you as model, tokenizer, dataset, and training_arguments.

Instructions

  • Import the LoRA configuration from the associated library.
  • Instantiate LoRA configuration parameters with the defaults given to lora_config.
  • Integrate the LoRA parameters into SFTTrainer.
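The three steps above can be sketched as follows. This is a minimal sketch, not the exercise's exact solution: the specific hyperparameter values (r, lora_alpha, lora_dropout) are illustrative stand-ins for "the defaults given", and model, tokenizer, dataset, and training_arguments are assumed to be pre-loaded by the exercise environment.

```python
from peft import LoraConfig   # LoRA configuration lives in the peft library
from trl import SFTTrainer

# Step 2: instantiate the LoRA configuration.
# These values are illustrative; use the defaults given in the exercise.
lora_config = LoraConfig(
    r=8,                    # rank of the low-rank update matrices
    lora_alpha=16,          # scaling factor applied to the LoRA updates
    lora_dropout=0.05,      # dropout applied to the LoRA layers
    task_type="CAUSAL_LM",  # TinyLLama is a causal language model
)

# Step 3: pass the LoRA config to SFTTrainer via peft_config, so the
# trainer wraps the pre-loaded model with LoRA adapters before training.
# (Assumes model, tokenizer, dataset, and training_arguments are pre-loaded;
# newer trl versions rename the tokenizer argument to processing_class.)
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=training_arguments,
    tokenizer=tokenizer,
    peft_config=lora_config,
)
trainer.train()
```

With peft_config supplied, only the small adapter matrices are trained while the base model's weights stay frozen, which is what makes the pipeline faster and lighter than full fine-tuning.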