Retrieval Augmented Generation (RAG) with LangChain
Exercise

Ragas context precision evaluation

To start your RAG evaluation journey, you'll evaluate the context precision metric using the ragas framework. Recall that context precision measures how relevant the retrieved documents are to the input query.
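To build intuition for the metric before using ragas, here's a minimal sketch of how context precision can be computed by hand. It assumes each retrieved document has been judged relevant (1) or not (0) against the ground truth; the metric averages precision@k over the ranks where a relevant document appears. The function name and judging scheme are illustrative, not part of the ragas API.

```python
def context_precision(relevance):
    """Compute context precision from a list of 0/1 relevance flags,
    one per retrieved document, in rank order.

    The score is the mean of precision@k over every rank k at which
    the retrieved document is relevant; 0.0 if nothing is relevant.
    """
    relevant_so_far = 0
    precisions = []
    for k, rel in enumerate(relevance, start=1):
        if rel:
            relevant_so_far += 1
            precisions.append(relevant_so_far / k)  # precision@k at this rank
    return sum(precisions) / len(precisions) if precisions else 0.0

# Relevant documents retrieved at ranks 1 and 3, out of four:
print(context_precision([1, 0, 1, 0]))  # (1/1 + 2/3) / 2 ≈ 0.833
```

Note that ranking order matters: relevant documents retrieved earlier push the score toward 1.0, while the same documents buried lower in the list pull it down.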

In this exercise, you've been provided with an input query, the documents retrieved by a RAG application, and the ground truth: the most appropriate document to retrieve, according to a human expert. You'll calculate the context precision on these strings before evaluating an actual LangChain RAG chain in the next exercise.

The text generated by the RAG application has been saved to the variable model_response for brevity.

Instructions

100 XP
  • Define a ragas context precision chain.
  • Evaluate the context precision of the retrieved documents with respect to the input query; the ground_truth has already been provided.