
Adding memory and conversation

1. Adding memory and conversation

Brilliant work! Next, we'll test our tool use before adding memory for more complex, multi-turn conversations.

2. Testing tool use

We'll see if our chatbot can respond with the Wikipedia tool.

3. Testing tool use

We'll use our graph display function to check the graph without the try-except block, since we've already tested this feature. Then, we'll set up our streaming function, now labeled stream_tool_responses, to see whether the tool was correctly called. Inside this function, we'll access the messages stored in the chatbot's events, which should reference the specific tools that were called. Finally, we'll pass a test query: "House of Lords".
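The steps above can be sketched as follows. This is a minimal, hedged version: the compiled graph is passed in as a parameter so the sketch is self-contained, whereas in the lesson the function closes over the graph built in earlier steps.

```python
def stream_tool_responses(graph, user_input):
    """Stream graph events and print each node's latest message."""
    # Each event maps a node name (e.g. "chatbot" or "tools") to its state
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            # The last message may be a tool call or the model's reply
            print("Agent:", value["messages"][-1].content)
```

Called as `stream_tool_responses(graph, "House of Lords")`, it prints the Wikipedia tool call followed by the chatbot's summarized answer.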

4. Visualizing the diagram

The diagram shows the tools node added, with all nodes and edges correctly implemented, reflecting all of the possible conversation outcomes.

5. Streaming the output

The abbreviated output shows the test query passed to the chatbot, along with the Wikipedia tool call in the metadata field called "name". The next response shows a summary generated from the House of Lords Wikipedia page. The chatbot's final answer modifies the content from the Wikipedia summary using the LLM to enhance the clarity of its response. Additional details appear in the response_metadata field. Rather than independently responding to different queries,

6. Adding memory

let's add memory to our chatbot so it can maintain a conversation. First, we'll import MemorySaver from langgraph.checkpoint.memory to handle memory storage within our graph. Then, we'll create a MemorySaver instance called memory, which will act as our checkpointer for storing messages. Next, we'll compile the graph with graph_builder.compile(), passing memory as the checkpointer, so the chatbot retains conversation context.
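These three steps look like the fragment below (a sketch, assuming the graph_builder from the earlier tool-use steps is still in scope):

```python
from langgraph.checkpoint.memory import MemorySaver

# In-memory checkpointer that stores each conversation turn
memory = MemorySaver()

# Compile with the checkpointer so the chatbot retains context
graph = graph_builder.compile(checkpointer=memory)
```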

7. Streaming outputs with memory

To set up a chatbot that remembers context, we'll define a streaming function called stream_memory_responses for a single conversation. First, we'll create a config dictionary with a unique thread ID, "single_session_memory", which lets us maintain message history. Next, we'll use a for loop to stream events from the graph using .stream(), passing in the user’s input message along with the config dictionary. Finally, we'll extract and print the agent's last response by checking for messages in each event's values, allowing the chatbot to answer follow-up questions with context.
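A sketch of this function follows; as before, the compiled graph is passed in as a parameter here so the example is self-contained, while the lesson's version closes over the graph compiled with the checkpointer.

```python
def stream_memory_responses(graph, user_input):
    """Stream responses within a single remembered conversation."""
    # The thread ID keys the checkpointer: every call with this config
    # appends to the same conversation's message history
    config = {"configurable": {"thread_id": "single_session_memory"}}
    for event in graph.stream({"messages": [("user", user_input)]}, config):
        for value in event.values():
            # Only print from events that actually carry messages
            if "messages" in value:
                print("Agent:", value["messages"][-1].content)
```

Because the same thread ID is reused on every call, a follow-up like "Who built it?" is answered with the context of the earlier Colosseum query.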

8. Generating output with memory

Let's inspect the output! First, the chatbot processes the query and calls the Wikipedia tool to retrieve information on the Colosseum. Then it provides a summary, describing the Colosseum as Rome’s largest ancient amphitheater. Next, the chatbot answers, explaining it was built by Emperor Vespasian for gladiatorial events.

9. Generating output with memory

Since we passed config parameters for a single session of memory, the chatbot answers the follow-up question with the same tool, noting completion under Titus and modifications under Domitian.

10. Let's practice!

Nice work! Let's practice having some more conversation with our chatbot agent!
