MCP and LLMs: Prompts and Resources
1. MCP and LLMs: Prompts and Resources
In the previous video, we integrated MCP tools with an LLM to give it the ability to use functions to expand its capabilities. In this video, we'll add the other two primitives—resources and prompts—so the LLM has the right context and behavior.
2. Resources and Prompts in the LLM Flow
Resources and prompts don't get "called" by the LLM like tools. Instead, the client fetches them from the MCP server and injects them into the request. Resources provide read-only context, like a list of valid timezone locations, and prompts provide instruction templates to set the model's behavior and optimize it for the task. We'll be continuing with the OpenAI Responses API from the previous lesson.
3. The Prompt-Resource Workflow
We'll be integrating a prompt and resource together in a single workflow, but you can separate these out if your use case doesn't require both. First, the client will fetch our resource—for our timezone converter, that's the list of supported timezones. This resource will allow us to validate that the LLM is requesting valid timezones for the tool.
4. The Prompt-Resource Workflow
Second, the client fetches the prompt with the user's message so the template and request are combined into a single prompt.
5. The Prompt-Resource Workflow
Third, the client combines the prompt and resource and sends them to the LLM as context, so the model understands the supported locations, task, and rules in one place.
6. The Prompt-Resource Workflow
Finally, we get a response. If the request is ambiguous, the model can ask for clarification using the prompt's rules. If it's clear, it can call the timezone tool; the resource context helps it use valid locations.
7. Local MCP Server: timezone_server.py
Our timezone server exposes a tool, a resource containing valid timezone names stored in a .txt file, and a prompt called convert_timezone that instructs the model to ask for clarification when user inputs are ambiguous.
8. Client Helper Functions
On the client side, we'll use two async functions we defined earlier. Recall that read_resource() takes a URI and returns the resource's contents—here, the supported locations list. read_prompt() takes a prompt template name and a user input to inject. We'll use these functions to retrieve both primitives for use with the LLM.
9. 1. Fetch Resource and Prompt
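The two helpers just mentioned can be sketched as below. This is a hedged reconstruction, not the course's exact code: it assumes an already-open mcp.ClientSession passed in as `session`, and the SDK's result shapes (a `contents` list whose items carry `.text`, and prompt `messages` whose `content` carries `.text`).

```python
# Hedged sketch of the client helper functions from earlier in the course.
# `session` is assumed to be an open mcp.ClientSession.
async def read_resource(session, uri: str) -> str:
    """Fetch a resource by URI and return its text contents."""
    result = await session.read_resource(uri)
    return result.contents[0].text

async def read_prompt(session, name: str, arguments: dict) -> str:
    """Fetch a prompt template, inject arguments, and return its text."""
    result = await session.get_prompt(name, arguments=arguments)
    return result.messages[0].content.text
```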
We'll define a helper function that opens a session and fetches two things. First, we read the locations.txt resource and extract its text. Second, we retrieve the prompt and inject the user's input into the timezone_request argument. We extract the raw text from this populated prompt and return it alongside the resource.
10. 2. Build the System Message and Call the LLM
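That fetch step might look like the following sketch. The resource URI, prompt name, and argument name are assumptions based on the server described earlier, and `session` stands in for an already-initialized mcp.ClientSession rather than one opened inside the function.

```python
# Hedged sketch: fetch the resource and the populated prompt in one pass.
# URI, prompt name, and argument name are assumptions; `session` is an
# open, initialized mcp.ClientSession.
async def fetch_resource_and_prompt(session, user_input: str) -> tuple[str, str]:
    # 1. Read the locations resource and extract its text
    res = await session.read_resource("resource://locations")
    locations = res.contents[0].text
    # 2. Fetch the prompt with the user's input injected into its argument
    prompt = await session.get_prompt(
        "convert_timezone", arguments={"timezone_request": user_input}
    )
    prompt_text = prompt.messages[0].content.text
    return locations, prompt_text
```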
We'll call the LLM with this context in a second function. We get the resource and prompt text from MCP, then combine them into one message. This message can then be sent to the model along with the tools list. We've left out the tools code from the previous video for brevity, but it uses the same get_tools_from_mcp() function and reformats the output into a list of dictionaries.
11. 3. Handling the Response
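The combine-and-call step might be sketched like this. To stay self-contained, the sketch takes an openai.OpenAI() instance as a `client` argument; the model name is a placeholder, and since the populated prompt already contains the user's request, prompt and resource collapse into a single input message.

```python
# Hedged sketch: merge the resource and populated prompt into one message,
# then send it to the Responses API. `client` is an openai.OpenAI() instance;
# the model name is a placeholder.
def build_input(locations: str, prompt_text: str) -> list[dict]:
    """The populated prompt already includes the user's request, so prompt
    and resource combine into a single context message."""
    content = f"{prompt_text}\n\nSupported timezone locations:\n{locations}"
    return [{"role": "user", "content": content}]

def call_llm(client, locations: str, prompt_text: str, tools: list):
    """Send the combined context plus the tools list to the model."""
    return client.responses.create(
        model="gpt-4o-mini",
        input=build_input(locations, prompt_text),
        tools=tools,  # list of dicts built from get_tools_from_mcp()
    )
```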
Next, we check the response output. If it's a message, the model replies directly; this covers unrelated or ambiguous queries. If it's a function call, we call the MCP tool and send the result back in a follow-up request, same as before. The resource and prompt shape the context, but the tool path is unchanged.
12. Example: Ambiguous Request
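The routing logic just described can be sketched as follows; `call_mcp_tool` and `send_followup` are hypothetical async callables standing in for the MCP tool call and the follow-up Responses API request from the previous video.

```python
import json

# Hedged sketch of the response-routing step. `call_mcp_tool` and
# `send_followup` are hypothetical stand-ins for the MCP tool call and
# the follow-up LLM request from the previous video.
async def handle_response(response, call_mcp_tool, send_followup) -> str:
    for item in response.output:
        if item.type == "message":
            # Direct reply: covers unrelated or ambiguous queries
            return item.content[0].text
        if item.type == "function_call":
            # Tool path: run the MCP tool, then send the result back
            args = json.loads(item.arguments)
            result = await call_mcp_tool(item.name, args)
            return await send_followup(item.call_id, result)
    return ""
```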
With an ambiguous query like "What time is it in Canada?", the prompt's rules tell the model to ask for clarification instead of guessing. The model doesn't call the tool; it responds with a question. That's the prompt at work.
13. Example: Clear Request
With a clear request, the model has the supported locations from the resource and the task from the prompt. It calls the timezone tool with valid arguments and returns the answer.
14. Recap: Resources and Prompts with the LLM
To recap: the client fetches the resource and prompt from the MCP server, injects both as context, then sends the user query and tools to the LLM. The model replies with a message or a tool call. Resources give it data; prompts give it behavior. All three MCP primitives work together.
15. Let's practice!
Time to give this a go!