
Building a hybrid retrieval chain

1. Building a hybrid retrieval chain

By now, we've covered two different graph models, lexical graphs and domain graphs, and seen how to add embeddings to our graphs. Let's round out the chapter by completing a retrieval chain that combines graphs and embeddings.

2. Combining retrieval

To get the best from both approaches, we need to create a chain that uses both vector search and text-to-Cypher to retrieve relevant information.

3. Combining retrieval

This information can then be injected into the prompt along with instructions on how to interpret it and respond to the user. To achieve this, we will need to use the LangChain RunnableLambda class.

4. Making functions runnable

RunnableLambda() converts a regular Python function into an object that can be invoked, batched, or streamed in an LCEL chain. The class expects one positional argument, a function, either inline or defined elsewhere. It takes the input from the chain, executes the function, and returns the result. RunnableLambda can then be treated like any other LCEL object. We can test this by calling the .invoke() method to execute the lambda, extracting the "input" key from the input dictionary and returning the value multiplied by two.
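A minimal sketch of that doubling example (the variable name doubler is ours):

```python
from langchain_core.runnables import RunnableLambda

# Wrap a plain Python function as a runnable LCEL object
doubler = RunnableLambda(lambda x: x["input"] * 2)

# .invoke() executes the function: extract "input" and return it doubled
print(doubler.invoke({"input": 21}))  # 42
```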

5. Runnable passthroughs

The RunnablePassthrough class can be used within an LCEL chain to manipulate the current state. The assign method retains the current state unchanged and assigns new keys based on the named argument. Invoking this RunnablePassthrough will append a new key to the input dictionary called doubled, holding the original input value multiplied by two. These two runnables will be key in our retrieval chain.
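A short sketch of that passthrough, continuing the doubling example (the key and variable names are illustrative):

```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Keep the existing input unchanged and add a new "doubled" key
passthrough = RunnablePassthrough.assign(
    doubled=RunnableLambda(lambda x: x["input"] * 2)
)

print(passthrough.invoke({"input": 21}))  # {'input': 21, 'doubled': 42}
```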

6. Building a Q&A prompt

We create a GraphRAG QA prompt template that begins with the instruction: You are a helpful assistant, answering questions about Romeo and Juliet.

7. Building a Q&A prompt

The prompt will need to include the results of the vector search using the vectors placeholder,

8. Building a Q&A prompt

and the records returned from the knowledge graph. As the knowledge graph has been built with domain-specific knowledge, we include instructions to treat this information as the source of truth. If the information is not included in the result, fall back to the text retrieved using vector search. Then, if the information is not included in either, refuse to answer.
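A sketch of what such a prompt template might look like; the exact wording and the {input}, {vectors}, and {records} placeholder names are assumptions based on the description above:

```python
from langchain_core.prompts import ChatPromptTemplate

qa_prompt = ChatPromptTemplate.from_template("""
You are a helpful assistant, answering questions about Romeo and Juliet.

Treat the knowledge graph records below as the source of truth.
If the answer is not in the records, fall back to the text from the vector search.
If the answer is in neither, refuse to answer.

Vector search results:
{vectors}

Knowledge graph records:
{records}

Question: {input}
""")
```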

9. Building a Q&A Chain

For brevity, we've left out the code for the graph, vector index, and text-to-Cypher chain from earlier in the course; we'll assume they've already been defined. To combine the methods, we use RunnablePassthrough's .assign() method to assign new keys to the chain's input. First, a RunnableLambda extracts the "input" key and invokes the vector retriever, assigning the results to vectors. Then, the text-to-Cypher chain converts the input to a Cypher query, which is executed using the Neo4jGraph object and assigned to records. The input, vectors, and records are then ready to be injected into the QA prompt using an LCEL pipe, passed to an LLM, and parsed as a string.
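One possible sketch of the combined chain. The names retriever, cypher_chain, graph, qa_prompt, and llm stand in for the objects defined earlier in the course, and the exact input expected by the text-to-Cypher chain is an assumption:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

qa_chain = (
    RunnablePassthrough.assign(
        # Vector search: retrieve text chunks similar to the question
        vectors=RunnableLambda(lambda x: retriever.invoke(x["input"])),
        # Text-to-Cypher: generate a query from the question, then run it on the graph
        records=RunnableLambda(
            lambda x: graph.query(cypher_chain.invoke({"question": x["input"]}))
        ),
    )
    | qa_prompt   # input, vectors, and records fill the prompt placeholders
    | llm
    | StrOutputParser()
)
```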

10. Invoking the Q&A Chain

We have created a chain that handles the entire Graph RAG retrieval process from start to finish.
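A usage sketch, assuming the qa_chain defined above; the question itself is just an example:

```python
answer = qa_chain.invoke({"input": "Who does Juliet fall in love with?"})
print(answer)
```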

11. Let's practice!

Let's build one for our Romeo and Juliet knowledge graph.
