Creating a RAG chain
Now to bring all the components together in your RAG workflow! You've prepared the documents and ingested them into a Chroma database for retrieval. You've also created a prompt template that inserts the retrieved chunks from the academic paper so the model can use them to answer questions.
The prompt template you created in the previous exercise is available as `prompt_template`, an OpenAI model has been initialized as `llm`, and the code to recreate your `retriever` has been included in the script.
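If you're working outside the course environment, `prompt_template`, `llm`, and `docs` won't be predefined. Here is a minimal sketch of what the first two might look like, assuming a template with `context` and `question` variables and a chat model (the model name is an illustration, not taken from the course):

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Hypothetical reconstruction of the template from the previous exercise
prompt_template = ChatPromptTemplate.from_template(
    "Use the context provided to answer the question.\n\n"
    "Context: {context}\n\n"
    "Question: {question}"
)

# An OpenAI chat model; the model choice here is an assumption
llm = ChatOpenAI(api_key='', model='gpt-4o-mini')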
Exercise instructions
- Create an LCEL chain to link `retriever`, `prompt_template`, and `llm` so that the retrieved documents are passed to the model.
- Invoke the chain on the `'question'` provided.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
import os

from langchain_chroma import Chroma  # assumes the langchain-chroma package
from langchain_openai import OpenAIEmbeddings

# Embed the documents and persist them in a Chroma vector database
vectorstore = Chroma.from_documents(
    docs,
    embedding=OpenAIEmbeddings(api_key='', model='text-embedding-3-small'),
    persist_directory=os.getcwd()
)
# Expose the vector store as a retriever that returns the top 3 similar chunks
retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 3}
)
# Create a chain to link retriever, prompt_template, and llm
rag_chain = ({"context": ____, "question": ____}
             | ____
             | ____)
# Invoke the chain
response = ____("Which popular LLMs were considered in the paper?")
print(response.content)
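For reference, here is one way to fill in the blanks. This is a sketch rather than the only valid answer; it assumes `prompt_template` expects `context` and `question` input variables and that `llm` is a chat model, so the response exposes `.content`:

from langchain_core.runnables import RunnablePassthrough

# Map the retrieved documents to "context" and forward the raw question,
# then pipe the filled prompt into the model
rag_chain = ({"context": retriever, "question": RunnablePassthrough()}
             | prompt_template
             | llm)

response = rag_chain.invoke("Which popular LLMs were considered in the paper?")
print(response.content)

Because the dictionary's values are themselves runnables, LCEL invokes `retriever` on the input string while `RunnablePassthrough()` forwards it unchanged, so both the documents and the question reach the prompt.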