
Sparse retrieval with BM25

Time to try out a sparse retrieval implementation! You'll create a BM25 retriever to ask questions about an academic paper on RAG, which has already been split into chunks and stored in a list called chunks. An OpenAI chat model and a prompt have also been defined as llm and prompt, respectively. You can view the provided prompt by printing it in the console.
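For reference, a minimal sketch of how the predefined objects might look. The model name and prompt wording below are assumptions for illustration, not the exercise's exact setup:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Hypothetical stand-ins for the predefined objects; the actual
# model and prompt text used in the exercise may differ
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt = ChatPromptTemplate.from_template(
    "Use the context to answer the question.\n"
    "Context: {context}\n"
    "Question: {question}"
)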

This exercise is part of the course

Retrieval Augmented Generation (RAG) with LangChain


Exercise instructions

  • Create a BM25 sparse retriever from the documents stored in chunks, and configure it to return 5 documents at retrieval time.
  • Create an LCEL retrieval chain to integrate the BM25 retriever with the llm and prompt provided.

Interactive exercise

Complete the sample code to successfully finish this exercise.

# Create a BM25 retriever from chunks
retriever = ____

# Create the LCEL retrieval chain
chain = ({"context": ____, "question": ____}
         | ____
         | ____
         | StrOutputParser()
)

print(chain.invoke("What are knowledge-intensive tasks?"))
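One way to complete the scaffold, assuming chunks, llm, and prompt are defined as described above. BM25Retriever ships in langchain_community and requires the rank_bm25 package to be installed:

from langchain_community.retrievers import BM25Retriever
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Create a BM25 retriever from chunks, returning the top 5 documents
retriever = BM25Retriever.from_documents(documents=chunks, k=5)

# Create the LCEL retrieval chain: retrieved documents fill {context},
# and the raw input string passes through as {question}
chain = ({"context": retriever, "question": RunnablePassthrough()}
         | prompt
         | llm
         | StrOutputParser()
)

print(chain.invoke("What are knowledge-intensive tasks?"))

Because BM25 scores documents by lexical term overlap rather than embedding similarity, no vector store or embedding model is needed here; the retriever is built directly from the chunked documents.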