
Building the retrieval chain

Now for the finale of the chapter! You'll create a retrieval chain using the LangChain Expression Language (LCEL). This will combine the vector store containing the embedded document chunks from the RAG paper you loaded earlier, a prompt template, and an LLM, so you can begin talking to your documents.

Here's a reminder of the prompt_template you created in the previous exercise, which is available for you to use:

Use only the context provided to answer the following question. If you don't know the answer, reply that you are unsure.
Context: {context}
Question: {question}
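
For reference, here is a minimal sketch of how such a prompt_template could be built with ChatPromptTemplate; this assumes a recent langchain_core import path and is not necessarily the exact code from the previous exercise:

from langchain_core.prompts import ChatPromptTemplate

# Template with placeholders for the retrieved context and the user's question
prompt_template = ChatPromptTemplate.from_template(
    """Use only the context provided to answer the following question. If you don't know the answer, reply that you are unsure.
Context: {context}
Question: {question}"""
)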

The vector_store of embedded document chunks that you created previously has also been loaded for you, along with all of the libraries and classes required.
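If you want to reproduce the setup locally, the vector_store could have been built along these lines; the chunks variable, the embedding model, and the package names are assumptions rather than the course's exact code:

from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma

# Embed the document chunks from the RAG paper and index them in a Chroma vector store
embedding_function = OpenAIEmbeddings(model="text-embedding-3-small")
vector_store = Chroma.from_documents(documents=chunks, embedding=embedding_function)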

This exercise is part of the course

Retrieval Augmented Generation (RAG) with LangChain


Exercise instructions

  • Convert the Chroma vector_store into a retriever object for use in the LCEL retrieval chain.
  • Create the LCEL retrieval chain to combine the retriever, the prompt_template, the llm, and a string output parser so it can answer input questions.
  • Invoke the chain on the question provided.

Hands-on interactive exercise

Try this exercise by completing the sample code below.

# Convert the vector store into a retriever
retriever = vector_store.____(search_type="similarity", search_kwargs=____)

# Create the LCEL retrieval chain
chain = (
    {"____": ____, "question": ____}
    | ____
    | ____
    | ____
)

# Invoke the chain
print(chain.____("Who are the authors?"))
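
For comparison, here is one possible completed version of the exercise. It uses the standard LCEL RAG pattern with RunnablePassthrough and StrOutputParser; the choice of k=3 retrieved chunks is an assumption, not a course requirement:

from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Convert the vector store into a similarity-based retriever (k=3 is an assumed value)
retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 3})

# Pipe the retrieved context and the question through the prompt, the LLM, and a string parser
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt_template
    | llm
    | StrOutputParser()
)

# Invoke the chain on the question
print(chain.invoke("Who are the authors?"))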