1. The LangChain ecosystem
Hi! I'm Jonathan Bennion,
2. Meet your instructor...
an ML and AI Engineer and LangChain contributor, and I'll be teaching this course.
3. Build LLM Apps with LangChain
Since its release in 2022, LangChain has revolutionized the way that AI and LLMs are integrated into products and applications. It's trusted by huge enterprises and new startups alike for building and scaling LLM-powered applications.
4. The LangChain ecosystem
LangChain is part of a larger ecosystem that also includes LangSmith, for deploying applications into production, and LangGraph, for creating AI agents, which we'll introduce later in the course.
5. LangChain integrations
LangChain has lots of partner integrations, so we can access all of our favorite AI models and databases via LangChain.
6. Building LLM apps the LangChain way...
Let's say that we want to create a customer support chatbot that uses LLMs to converse with customers. The chatbot needs to be able to provide product information and recommendations, as well as respond to customers experiencing issues with placing orders. We also want to ensure that the model's responses are based on existing support articles, which can be easily tweaked and maintained.
7. Building LLM apps the LangChain way...
There are a few different components to manage here: an LLM, which may come from any of a whole host of providers, both proprietary and open-source;
8. Building LLM apps the LangChain way...
a mechanism to help the model decide whether to provide product information or advise on troubleshooting issues,
9. Building LLM apps the LangChain way...
a database of customer support articles for the model to use,
10. Building LLM apps the LangChain way...
and a mechanism for finding and integrating them into the chatbot.
Throughout the course, we'll use LangChain to create these components and connect them together in a modular and intuitive way.
11. Prompting OpenAI models
We'll begin our LangChain journey by defining and prompting LLMs, starting with proprietary models from OpenAI.
The ChatOpenAI class from the langchain_openai partner library can be used to define a model to use in LangChain apps. It sends a request to the OpenAI API and returns the response to the application. OpenAI's API requires an API key, which can be specified in this class; note that using the API incurs a cost.
Like other DataCamp courses you may have taken, you don't have to create an OpenAI account or incur any costs in this course - a placeholder API key will be provided for you.
To prompt this model, we call the .invoke() method on a prompt string.
The output is pretty long here, but the ChatOpenAI class accepts parameters like max_completion_tokens and temperature that you may have encountered elsewhere.
12. Prompting Hugging Face models
If we'd rather work with open-source models downloaded into a local directory, Hugging Face is an excellent choice for finding an appropriate model.
The HuggingFacePipeline class and its .from_model_id() method can be used to download a model for a particular task; here, a text generation model.
To pass this model a prompt, we again use the .invoke() method.
Notice that, although we used a completely different model from a different model provider, and downloaded it locally instead of making a request to an API, we only needed to change one class and its arguments.
13. Let's practice!
Let's get started prompting LLMs in LangChain!