Ragas faithfulness evaluation
In this exercise, you'll evaluate the faithfulness of the RAG architecture you created at the end of Chapter 1. This chain has been re-defined for you and is available through the variable chain.
You'll use the query provided, the chain's output, and the retrieved documents to evaluate faithfulness using the ragas framework.
The classes required have already been imported for you.
This exercise is part of the course Retrieval Augmented Generation (RAG) with LangChain.
Exercise instructions
- Query the retriever using the query provided and use a list comprehension to extract the document text from each retrieved document (a minimal sketch of this step follows this list).
- Define a ragas faithfulness chain.
- Evaluate the faithfulness of the RAG chain available; you'll need to invoke the chain to generate the answer.
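The first step, querying the retriever and unpacking the document text, typically looks like the minimal sketch below. It assumes a LangChain retriever is already available as retriever, as in the exercise environment; older LangChain versions use get_relevant_documents() in place of invoke().

query = "How does RAG improve question answering with LLMs?"
docs = retriever.invoke(query)                        # returns a list of LangChain Document objects
retrieved_docs = [doc.page_content for doc in docs]   # keep only the raw text, since ragas expects strings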
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
from ragas.metrics import faithfulness
# Query the retriever using the query and extract the document text
query = "How does RAG improve question answering with LLMs?"
retrieved_docs = [doc.____ for doc in retriever.____(____)]
# Define the faithfulness chain
faithfulness_chain = ____(____, llm=llm, embeddings=embeddings)
# Evaluate the faithfulness of the RAG chain
eval_result = ____({
    "question": ____,
    "answer": ____.____(query),
    "contexts": ____
})
print(eval_result)
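One possible completion of the blanks is sketched below. It assumes the pre-imported evaluator class is ragas's RagasEvaluatorChain (its import path and accepted keyword arguments differ between ragas releases) and that llm, embeddings, and chain are the objects already defined in the exercise environment; retrieved_docs and query come from the retrieval step sketched earlier.

# from ragas.langchain.evalchain import RagasEvaluatorChain  # assumed import; already done for you here

# Wrap the faithfulness metric in an evaluator chain, mirroring the scaffold's arguments
faithfulness_chain = RagasEvaluatorChain(metric=faithfulness, llm=llm, embeddings=embeddings)

# Score how well the generated answer is grounded in the retrieved contexts
eval_result = faithfulness_chain({
    "question": query,                # the original user question
    "answer": chain.invoke(query),    # answer generated by the RAG chain
    "contexts": retrieved_docs        # document texts extracted from the retriever
})
print(eval_result)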