
Using LoRA adapters

You work at a startup that provides customer service chatbots, which automatically resolve simple customer questions.

You have been tasked with fine-tuning the Maykeye/TinyLLama-v0 language model to answer customer service questions using the bitext dataset. This model will be used in a chatbot that your team provides. The training script is already almost complete, but you want to integrate LoRA into your fine-tuning: it trains far fewer parameters, which would let your team's training pipeline complete more quickly during deployments.
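To see where LoRA's efficiency comes from: instead of updating a full weight matrix W of shape (d, k), LoRA trains two low-rank factors whose product approximates the update, so only r * (d + k) parameters are trainable per adapted matrix. A quick back-of-envelope sketch (the dimensions below are hypothetical, chosen only for illustration; r=12 matches the rank used in this exercise):

```python
# Trainable-parameter comparison for one weight matrix of shape (d, k).
# Full fine-tuning updates every entry; LoRA trains two low-rank factors
# B (d x r) and A (r x k), i.e. r * (d + k) parameters.
# The dimensions are hypothetical, for illustration only.

def full_params(d: int, k: int) -> int:
    # Parameters updated by full fine-tuning of one (d, k) matrix
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    # Trainable parameters added by a rank-r LoRA adapter on that matrix
    return r * (d + k)

d, k, r = 1024, 1024, 12  # hypothetical projection size; rank from the exercise
print(full_params(d, k))     # 1048576
print(lora_params(d, k, r))  # 24576
```

With these (made-up) dimensions, the LoRA adapter trains roughly 2% of the parameters that full fine-tuning would update for that matrix, which is why the training pipeline finishes faster.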

The relevant model, tokenizer, dataset, and training arguments have been pre-loaded for you in model, tokenizer, dataset, and training_arguments.

This exercise is part of the course Fine-Tuning with Llama 3.

Exercise instructions

  • Import the LoRA configuration from the associated library.
  • Instantiate the LoRA configuration with the given default values, assigning it to lora_config.
  • Integrate the LoRA parameters into SFTTrainer.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Import LoRA configuration class
from ____ import ____

# Instantiate LoRA configuration with values
lora_config = ____(
    r=12,
    lora_alpha=8,
    task_type="CAUSAL_LM",
    lora_dropout=0.05,
    bias="none",
    target_modules=['q_proj', 'v_proj']
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=training_arguments,
    # Pass the lora_config to trainer
    ____,
)
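For reference, a completed version might look like the sketch below. It assumes the LoraConfig class from Hugging Face's peft library and the peft_config parameter of trl's SFTTrainer, with model, tokenizer, dataset, and training_arguments pre-loaded as described above; the exercise environment may expect slightly different names.

```python
# Possible solution sketch (assumes the peft and trl libraries; model,
# tokenizer, dataset, and training_arguments are pre-loaded by the exercise).
from peft import LoraConfig
from trl import SFTTrainer

lora_config = LoraConfig(
    r=12,                                # rank of the low-rank update matrices
    lora_alpha=8,                        # scaling factor applied to the update
    task_type="CAUSAL_LM",               # causal language modeling task
    lora_dropout=0.05,                   # dropout applied to the LoRA layers
    bias="none",                         # do not train bias terms
    target_modules=['q_proj', 'v_proj']  # attach adapters to attention projections
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=training_arguments,
    peft_config=lora_config,  # integrate LoRA into supervised fine-tuning
)
```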