Introduction to large language models (LLMs)

1. Introduction to large language models (LLMs)

Hi. My name is Jasmin. I'm a Senior Data Science Content Developer at DataCamp, and I'll be your instructor for this course on large language models, or LLMs, in Python.

2. Previous knowledge

Before we get started, please ensure you are familiar with navigating the Hugging Face Hub and working with deep learning models.

3. Introduction to LLMs

Together, we'll explore understanding and using LLMs for advanced language tasks.

4. Large language models

LLMs are sophisticated AI models capable of understanding and generating human language text. They can handle various complex tasks, including summarizing, generating, and translating text. They can even answer questions. Some of today's most popular LLMs are shown here.

5. LLMs

LLMs are typically based on deep learning architectures, most commonly transformers. They are called large because they are usually huge neural networks with millions or billions of parameters, trained on enormous amounts of text data. In this course, we'll mainly use pre-trained LLMs from Hugging Face, which have already been trained for a particular task.

6. Using Hugging Face models

Here's a reminder of how to use an LLM from Hugging Face with its transformers library. We use a pipeline to specify the task and model; it's best practice to specify both. In this example, we're working with a text summarization task. We input a long body of text about traditional Japanese houses that we want to summarize. We use the max_length parameter to limit the output to 50 tokens. Depending on the model, tokenizer, or text, we may end up with unwanted whitespace in our output. We can remove this by passing the clean_up_tokenization_spaces argument to the pipeline call and setting it to True, although most of today's summarization models do this automatically.
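Here's a minimal sketch of that call in code. The checkpoint name and input text are illustrative placeholders, not the course's exact example:

from transformers import pipeline

# Specify both the task and the model (an assumed summarization checkpoint)
summarizer = pipeline(task="summarization", model="facebook/bart-large-cnn")

# Placeholder for the long input text about traditional Japanese houses
long_text = "Traditional Japanese houses are made of wood and paper..."

# Limit the summary to 50 tokens and strip unwanted tokenization whitespace
summary = summarizer(long_text, max_length=50, clean_up_tokenization_spaces=True)
print(summary)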

7. Model outputs

Let's review the model's output. Understanding our model's output structure is helpful here. We can find this structural information on the Hugging Face model card, or investigate it ourselves by printing the entire output. For example, the summarized text from this model is found under the summary_text key, so to access that directly, we would instead print summary[0]["summary_text"].
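Continuing the sketch above, the pipeline returns a list with one dictionary per input:

# Print the full output to inspect its structure
print(summary)
# [{'summary_text': '...'}]

# Access the summarized text directly via its key
print(summary[0]["summary_text"])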

8. Up next

Nice job! Throughout this course, we will build on what we already know about LLMs, perform new tasks with them, and explore how they are built, before learning how to fine-tune them and evaluate their performance.

9. Let's practice!

Let's start with some practice.