
Creating the Graph RAG chain

1. Creating the Graph RAG chain

Now that we've created our graph documents, stored them, and queried them using Cypher, let's bring everything together to create a Graph RAG chain with LangChain.

2. Building the Graph RAG architecture

Let's start to put together a Graph RAG architecture using what we know so far. We have our Neo4j graph database,

3. Building the Graph RAG architecture

which we've populated with graph documents by using LLMs to infer the nodes and relationships.

4. Building the Graph RAG architecture

We also know that our application will require a user input, and that Cypher is required to query the Neo4j database and return the relevant documents. So what's missing?

5. Building the Graph RAG architecture

We need a way to translate our user input into a Cypher query, analogous to how we embed user queries in vector RAG applications, and then return the graph result in natural language. How do we do this? You guessed it - LLMs!

6. From user inputs to Cypher queries

So how does the LLM know what Cypher query to generate? We give it access to the graph schema containing the node properties and relationships,

7. From user inputs to Cypher queries

so for a given user input,

8. From user inputs to Cypher queries

it can infer from this information which properties and relationships should be included.

9. GraphCypherQAChain

LangChain provides a GraphCypherQAChain class that performs this translation for us. The user submits a natural language input, it gets translated into a Cypher query, and the generated Cypher is used to query the Neo4j database. Finally, the results obtained from the Neo4j database query are converted back into natural language for the user.

10. GraphCypherQAChain

This GraphCypherQAChain is really two sequential chains under the hood: a Cypher-generation chain that writes the Cypher query, and a result-summarization chain that returns the natural-language response.

11. Refresh schema

Let's continue our previous example where we created a graph containing Wikipedia documents about 'large language models'. We can refresh the schema to see the most recent nodes and relationships, which is particularly useful when the graph is being auto-populated.
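As a minimal sketch, assuming `graph` is the `Neo4jGraph` instance we connected earlier (and that a Neo4j database is running), refreshing and inspecting the schema looks like this:

```python
graph.refresh_schema()   # re-read the current node labels, properties, and relationships
print(graph.get_schema)  # the schema string that will be shown to the LLM
```

The printed schema is exactly the information the chain will hand to the LLM when generating Cypher, so it's worth a quick sanity check before querying.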

12. Querying the graph

Now to the fun part. We import GraphCypherQAChain, call the .from_llm() method, and pass it an LLM and our graph database containing the graph documents. We'll also specify verbose=True so we can see the Cypher query and result returned from the database in the output. As before, we use temperature=0 to generate more deterministic responses. We'll invoke the chain on a query to find the most accurate model and save the result.
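Putting those pieces into code, a sketch of the chain setup might look like the following. The exact import path varies by LangChain version (`langchain.chains`, `langchain_community`, or `langchain_neo4j`), and recent versions also require opting in with `allow_dangerous_requests=True` because the chain executes LLM-generated Cypher against your database; the query string is an assumption for illustration:

```python
from langchain_openai import ChatOpenAI
from langchain.chains import GraphCypherQAChain  # import path may differ by version

llm = ChatOpenAI(temperature=0)  # temperature=0 for more deterministic responses

chain = GraphCypherQAChain.from_llm(
    llm=llm,
    graph=graph,    # the Neo4jGraph populated with our graph documents
    verbose=True,   # print the generated Cypher and the database result
)

result = chain.invoke({"query": "What is the most accurate model?"})
print(result["result"])
```

With `verbose=True`, the intermediate Cypher query and the raw context returned from Neo4j are printed alongside the final answer.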

13. Querying the graph

Let's view the output generated by verbose=True. We can see that the chain successfully created a Cypher query that finds all Model nodes in the database, sorts them by their accuracy property in descending order, and returns the node with the highest accuracy. The context shows the query result, which is also reflected in the chain result.
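The exact Cypher the LLM produces depends on the schema and the model, but based on the behavior described above, it would be something along these lines:

```cypher
// Find all Model nodes, sort by accuracy descending, return the top one
MATCH (m:Model)
RETURN m
ORDER BY m.accuracy DESC
LIMIT 1
```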

14. Customization

Note that here we are using the same LLM for generating the Cypher query and the final natural language result. We're also not changing the default prompt templates for either of these steps that are used under the hood. It is possible to use different models for the two steps, and specify our own prompt templates using the arguments shown, which we'll investigate in the next video.
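As a rough sketch of what that customization looks like, `.from_llm()` accepts separate `cypher_llm` and `qa_llm` arguments, plus `cypher_prompt` and `qa_prompt` for custom prompt templates. The model names and prompt variables below are hypothetical placeholders:

```python
chain = GraphCypherQAChain.from_llm(
    cypher_llm=ChatOpenAI(model="gpt-4", temperature=0),      # generates the Cypher query
    qa_llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),  # summarizes the result
    graph=graph,
    cypher_prompt=my_cypher_prompt,  # custom PromptTemplate (hypothetical)
    qa_prompt=my_qa_prompt,          # custom PromptTemplate (hypothetical)
    verbose=True,
)
```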

15. Let's practice!

Time to build your own Graph RAG application!
