
Setting up Llama training arguments

You are working with a Llama model used in a customer service chatbot. To get the best performance, your team will fine-tune the model for question answering using the Bitext customer service dataset.

You want to do a test run of the training loop to check that the training script works, so start by setting a small learning rate and limiting training to a handful of steps in your training arguments.

This exercise is part of the course

Fine-Tuning with Llama 3


Exercise instructions

  • Import and instantiate the helper class to store your training arguments.
  • Set the training argument for learning rate to a value of 2e-3.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Load helper class for the training arguments from the correct library
from ____ import ____

training_arguments = ____(
    # Set learning rate
    ____=____,
    warmup_ratio=0.03,
    num_train_epochs=3,
    output_dir='/tmp',
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    save_steps=10,
    logging_steps=2,
    lr_scheduler_type='constant',
    report_to='none'
)