Organize chatbot outputs with memory
1. Organize chatbot outputs with memory
Super! Now that our workflow is ready,

2. Streaming multiple tool outputs

we can test it! We'll first see whether the chatbot can answer a query by picking the correct tool. Then, we'll determine whether the chatbot can interleave the user's queries with answers to follow-up questions. To get started,

3. Streaming multiple tool outputs
we'll import the AIMessage and HumanMessage classes from the langchain_core.messages module, with a config variable set to one session. We'll create a function called multi_tool_output to handle queries for different tools. First, we'll define an "inputs" dictionary including the user's query as a HumanMessage, passed within the "content" field. Next, we'll stream messages and metadata using app.stream(), which accepts the "inputs" and "config", with "stream_mode" set to "messages" to enable real-time output. For each message that is not a HumanMessage, we'll print msg.content to access just the chatbot responses. We'll set the "end" parameter to an empty string to avoid excess line breaks, with "flush" set to "True" to ensure live outputs. A blank line will separate the answers for different queries.

4. Test with multiple tools
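The streaming function described above might look like the sketch below. This is not the exact lesson code: the `FakeApp` class and the minimal message dataclasses are stand-ins for a compiled LangGraph app and for langchain_core's HumanMessage/AIMessage, included only so the streaming logic can run on its own; the `app.stream(..., stream_mode="messages")` call shape and the `thread_id` config follow the lesson.

```python
from dataclasses import dataclass

# Minimal stand-ins for langchain_core.messages.HumanMessage / AIMessage
# and for a compiled LangGraph app, so this sketch is runnable on its own.
@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

class FakeApp:
    """Stub: yields (message, metadata) pairs, like stream_mode="messages"."""
    def stream(self, inputs, config, stream_mode="messages"):
        yield inputs["messages"][0], {}                    # echoed user message
        yield AIMessage("Tool: it is a palindrome."), {}   # tool response chunk
        yield AIMessage(" LLM: nice symmetry!"), {}        # LLM refinement chunk

app = FakeApp()
config = {"configurable": {"thread_id": "single_session"}}  # one session

def multi_tool_output(query):
    inputs = {"messages": [HumanMessage(content=query)]}
    chunks = []
    for msg, metadata in app.stream(inputs, config, stream_mode="messages"):
        # Skip the echoed HumanMessage so only chatbot output is printed
        if not isinstance(msg, HumanMessage):
            print(msg.content, end="", flush=True)
            chunks.append(msg.content)
    print("\n")  # blank line separates answers for different queries
    return "".join(chunks)
```

With a real LangGraph app, only the imports and the `app`/`config` setup would change; the filtering and printing loop stays the same.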
Let's test this function with two queries using different tools. The first query should trigger the palindrome tool, since the question asks "Is `Stella won no wallets` a palindrome?". The second query should trigger the historical events tool, asking "What happened on April 12th, 1955?". For each query, the chatbot will return a direct tool response before the LLM refines the output. Here, the phrase in the first query is labeled a palindrome, with additional comments from the LLM. The next query, referencing a date, returns information about the polio vaccine breakthrough, which we've abbreviated. Once again, the tool response is followed by an LLM refinement. We can modify this function to handle

5. Follow-up questions with multiple tools
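The palindrome tool's internals aren't shown in the transcript, but a typical check ignores case, spaces, and punctuation. The `palindrome_checker` function below is a hypothetical sketch of such a tool:

```python
import re

def palindrome_checker(text: str) -> str:
    """Hypothetical palindrome tool: ignore case, spaces, and punctuation."""
    # Normalize: lowercase, then strip everything except letters and digits
    cleaned = re.sub(r"[^a-z0-9]", "", text.lower())
    if cleaned and cleaned == cleaned[::-1]:
        return f"'{text}' is a palindrome."
    return f"'{text}' is not a palindrome."

print(palindrome_checker("Stella won no wallets"))
# → 'Stella won no wallets' is a palindrome.
```

Normalizing first is why a phrase like "Stella won no wallets" passes: stripped of spaces and case, it reads the same in both directions.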
follow-up questions as well as multiple tools. We'll call the function user_agent_multiturn, which accepts multiple queries. For each user and chatbot interaction, we'll print the user's query first. Then, we'll stream messages and metadata using the app.stream() method, which accepts the query and config parameters, with stream_mode set to "messages". For each message, we'll filter out human messages to return just the chatbot's responses, before concatenating and printing them. Finally, we'll add a new line to keep responses separated. To use this function, we'll pass our queries as a list called "queries", where every second question is a follow-up.

6. Full conversation output
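The multi-turn loop just described can be sketched as follows. As before, the `FakeApp` stub and the message dataclasses are stand-ins (not the lesson's actual objects) so the loop is runnable here; the example `queries` list mirrors the transcript's pattern of alternating questions and follow-ups, with the second follow-up phrased hypothetically.

```python
from dataclasses import dataclass

# Stand-ins for langchain_core messages and a compiled LangGraph app,
# so the multi-turn loop below runs as a self-contained sketch.
@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

class FakeApp:
    def stream(self, inputs, config, stream_mode="messages"):
        query = inputs["messages"][0]
        yield query, {}                                    # echoed user message
        yield AIMessage(f"[answer to: {query.content}]"), {}

app = FakeApp()
config = {"configurable": {"thread_id": "single_session"}}

def user_agent_multiturn(queries):
    responses = []
    for query in queries:
        print(f"User: {query}")
        # Keep only the non-human messages, then concatenate the chunks
        response = "".join(
            msg.content
            for msg, metadata in app.stream(
                {"messages": [HumanMessage(content=query)]},
                config,
                stream_mode="messages",
            )
            if not isinstance(msg, HumanMessage)
        )
        print(f"Agent: {response}\n")  # new line keeps responses separated
        responses.append(response)
    return responses

queries = [
    "Is `Stella won no wallets` a palindrome?",
    "What about the word `kayak`?",        # follow-up
    "What happened on April 12th, 1955?",
    "What else happened that day?",        # follow-up
]
user_agent_multiturn(queries)
```

Because the config pins one thread_id across all turns, a real LangGraph app with checkpointing would carry memory between queries, which is what lets the follow-ups resolve references like "that day".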
We now have a full conversation with different queries answered using different tools, our user and chatbot responses clearly marked, and follow-up questions enabled. Now it's your turn to practice

7. Let's practice!

with multiple tool conversations!