1. Wrap-up!
Congratulations on completing the course! Let's wrap up by reviewing what you've learned!
2. LangChain's core components
We started by talking about the core components of LangChain: models, prompts, chains, agents, and document retrievers. We used open source models from Hugging Face, as well as proprietary models from OpenAI.
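As a quick refresher, here is a minimal sketch of instantiating both kinds of models. It assumes the langchain-huggingface and langchain-openai integration packages are installed and an OpenAI API key is available; the model names are examples only, not the specific models used in the course.

```python
# Minimal sketch (assumes langchain-huggingface and langchain-openai are installed,
# and that OPENAI_API_KEY is set; the model names below are examples only).
from langchain_huggingface import HuggingFacePipeline
from langchain_openai import ChatOpenAI

# Open-source model downloaded from Hugging Face and run locally
hf_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",                # example model ID
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 50},
)

# Proprietary model accessed through the OpenAI API
openai_llm = ChatOpenAI(model="gpt-4o-mini")  # example model name

print(hf_llm.invoke("LangChain is"))
print(openai_llm.invoke("What is LangChain?").content)
```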
3. Chains and agents
We learned how to combine models and prompts into chains using LangChain Expression Language (LCEL), and how to create agents.
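For example, a minimal LCEL chain pipes a prompt into a model and an output parser. This is a sketch assuming the langchain-openai package; the prompt wording and question are illustrative.

```python
# Minimal LCEL chain sketch (assumes langchain-openai is installed and
# OPENAI_API_KEY is set; the prompt and question are illustrative).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Answer the question concisely: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")

# The | operator is LCEL: each component's output feeds the next
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What is LCEL?"}))
```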
4. Retrieval Augmented Generation (RAG)
Finally, we learned how to integrate external data into LLM applications using Retrieval Augmented Generation (RAG), overcoming the limitations of the models' training data.
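At a high level, the RAG workflow splits documents into chunks, embeds and indexes them in a vector store, then retrieves the most relevant chunks into the prompt at query time. Here is a compact sketch assuming the langchain-openai, langchain-community, and langchain-text-splitters packages; the document and question are placeholders.

```python
# Compact RAG sketch (assumes langchain-openai, langchain-community, and
# langchain-text-splitters are installed; the document and question are placeholders).
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = [Document(page_content="LangChain is a framework for building LLM applications.")]

# Split documents into chunks and index them in a vector store
chunks = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20).split_documents(docs)
vectorstore = Chroma.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

def format_docs(retrieved_docs):
    # Join the retrieved chunks into a single context string
    return "\n\n".join(d.page_content for d in retrieved_docs)

# Retrieved documents are injected into the prompt alongside the question
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

print(rag_chain.invoke("What is LangChain?"))
```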
5. LangChain Hub
As you continue your LangChain journey into AI application development, the LangChain Hub will be a fantastic resource along the way. The Hub contains a catalog of prompts for a whole range of different tasks. You can search, use, and add to this massive catalog as you progress.
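Pulling a prompt from the Hub takes a single call. This sketch assumes the langchainhub package is installed; the prompt handle is just one well-known public example.

```python
# Minimal Hub sketch (assumes the langchainhub package is installed;
# the handle below is a public community prompt used only as an example).
from langchain import hub

prompt = hub.pull("hwchase17/react")  # a popular prompt for ReAct-style agents
print(prompt)
```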
6. LangChain templates
LangChain Templates are also worth investigating. They are sets of ready-to-use application code that work out of the box and cover many of the most common use cases. They may require minor modifications or additions, but they're often the best place to start.
7. The LangChain ecosystem
The core LangChain package that you've seen in this course is only one piece of the full LangChain ecosystem, which also includes LangSmith, LangServe, and LangGraph.
LangSmith is used for troubleshooting and evaluating LLM applications, LangServe is used for deploying these applications to production, and LangGraph is used for building stateful, multi-agent applications as graphs.
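As a taste of what deployment looks like, here is a hedged sketch of exposing an LCEL chain as a REST API with LangServe. It assumes the langserve, fastapi, uvicorn, and langchain-openai packages are installed; the app title, route path, and prompt are illustrative.

```python
# Deployment sketch (assumes langserve, fastapi, uvicorn, and langchain-openai
# are installed; the title, path, and prompt are illustrative).
# LangSmith tracing can be switched on separately via environment variables,
# e.g. LANGCHAIN_TRACING_V2="true" and LANGCHAIN_API_KEY="<your key>".
from fastapi import FastAPI
from langserve import add_routes
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = (
    ChatPromptTemplate.from_template("Summarize this topic: {topic}")
    | ChatOpenAI(model="gpt-4o-mini")
)

app = FastAPI(title="Example LangChain app")
add_routes(app, chain, path="/summarize")  # exposes /summarize/invoke, /summarize/stream, etc.

# Run with, for example: uvicorn serve:app --reload
```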
Continue to explore these tools as the ecosystem evolves!
8. Let's practice!
Your journey is only just beginning, so onward!