
Prompts in MCP Servers

1. Prompts in MCP Servers

From resources we'll move on to our final MCP primitive: prompts.

2. What are MCP Prompts?

Prompts are templates stored on and served by MCP servers that prepare an LLM for a specific task. They typically outline the desired workflow and behavior so users can focus on stating their goal.

3. The Need for Prompting

In our timezone converter, one thing we'll need to handle is ambiguous user inputs. Users may forget to include the source or target timezone, or perhaps the information is insufficient for the LLM to infer the timezone, such as asking for the time in Canada, which spans six timezones. In these cases, we don't want the LLM to guess and potentially return incorrect information; instead, we want it to ask for clarification. We can do this by writing a prompt to outline this behavior.

4. The Timezone Converter Prompt

Here's a prompt we can use. It outlines the LLM's task, and captures the desired behavior in two simple rules. Rather than hard-coding this prompt into the LLM, we can expose it from the MCP server, which means we can select different prompts for different tasks. Let's write this into our server!
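The slide's exact wording isn't reproduced here, but a prompt of roughly this shape (a hypothetical reconstruction; the phrasing of the task and the two rules is an assumption) captures the idea:

```python
# Hypothetical reconstruction of the slide's prompt; the exact
# wording and rule phrasing are assumptions, not the course's text.
TIMEZONE_PROMPT = """\
You are a timezone conversion assistant. Convert times between
timezones for the user.

Rules:
1. If the source or target timezone is missing or ambiguous,
   ask the user for clarification instead of guessing.
2. Only answer once both timezones can be determined unambiguously.

User request: {timezone_request}"""
```

The `{timezone_request}` placeholder at the end is where the user's input will be inserted.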

5. Defining MCP Server Prompts

Like the other primitives, the mcp library provides a decorator for creating prompts: mcp.prompt. Inside the decorator, we specify the title of our prompt, which plays a role similar to a tool's name. We then define a function that loads the prompt, taking the user's input as an argument. This function could load prompts from files or databases if that's where you store them, but we'll use a hard-coded string. This f-string includes the model's task and rules, then inserts the user's request at the end.
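A sketch of such a prompt function (the function name, parameter name, and prompt wording are illustrative assumptions; the decorator is shown as a comment so the snippet runs standalone, since it needs an `mcp` server object):

```python
# In the server file this function would sit under a decorator like
# the one in the comment below. Names here are illustrative assumptions.

# @mcp.prompt(title="Timezone Converter")   # mcp = FastMCP(...) server object
def timezone_prompt(timezone_request: str) -> str:
    """Load the prompt, inserting the user's request at the end."""
    return f"""Convert the time in the user's request between timezones.

Rules:
1. If the source or target timezone is missing or ambiguous, ask
   the user for clarification.
2. Never guess a timezone.

User request: {timezone_request}"""

print(timezone_prompt("What time is 3pm in Toronto in London?"))
```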

6. Local MCP Server: timezone_server.py

Our server file now contains all three primitives: a tool that performs the timezone calculation, a resource for checking that a timezone is supported by the converter tool, and a prompt that makes our LLM more resilient to ambiguous user inputs.
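The overall shape of the server might look like the stdlib-only sketch below. The `@mcp.*` decorators appear as comments (they require the `mcp` package), and every name, signature, and resource URI here is an assumption for illustration, not the course's exact code:

```python
# Stdlib-only sketch of timezone_server.py's three primitives.
# All names, signatures, and the resource URI are assumptions.
from datetime import datetime
from zoneinfo import ZoneInfo, available_timezones

# @mcp.tool()
def convert_time(time: str, source_tz: str, target_tz: str) -> str:
    """Tool: convert 'YYYY-MM-DD HH:MM' from source_tz to target_tz."""
    naive = datetime.strptime(time, "%Y-%m-%d %H:%M")
    src = naive.replace(tzinfo=ZoneInfo(source_tz))
    return src.astimezone(ZoneInfo(target_tz)).strftime("%Y-%m-%d %H:%M")

# @mcp.resource("timezones://supported")
def supported_timezones() -> list[str]:
    """Resource: list the IANA timezones the tool supports."""
    return sorted(available_timezones())

# @mcp.prompt(title="Timezone Converter")
def timezone_prompt(timezone_request: str) -> str:
    """Prompt: task plus rules, with the user's request appended."""
    return ("Convert times between timezones. If a timezone is missing "
            f"or ambiguous, ask for clarification.\n\nRequest: {timezone_request}")

print(convert_time("2024-01-15 12:00", "UTC", "Asia/Tokyo"))
# 2024-01-15 21:00 (Tokyo is UTC+9 with no DST)
```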

7. Client: Listing Prompts

As with the other primitives, we can list and access prompts on the client side directly from the session. The .list_prompts() method lists each prompt on the server, and we unpack the name of each into a list.
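The client-side call can be sketched as below. `FakeSession` is a stand-in introduced purely for illustration; it mimics the shape of the real session's `.list_prompts()` result, where each prompt carries both a unique `.name` and a display `.title`:

```python
import asyncio
from types import SimpleNamespace

async def list_prompts(session) -> list[str]:
    """Unpack the name of each prompt the server exposes."""
    result = await session.list_prompts()
    return [prompt.name for prompt in result.prompts]

# Stand-in for the real MCP client session (an assumption for
# illustration): returns one prompt with a .name and a .title.
class FakeSession:
    async def list_prompts(self):
        return SimpleNamespace(prompts=[
            SimpleNamespace(name="timezone_prompt", title="Timezone Converter"),
        ])

names = asyncio.run(list_prompts(FakeSession()))
print(names)  # ['timezone_prompt'] -- the function name, not the title
```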

8. Client: Listing Prompts

Running the function, we can see the prompt we defined. Notice that the name returned is different from the title we set. You can think of the title as useful metadata that can be extracted from the session and displayed in a user interface, but the function name is used as the unique identifier.

9. Client: Listing Prompts

Because we used the .name attribute when unpacking the prompts, we got the function name and not the title. To retrieve the prompt itself,

10. Client: Retrieving Prompts

we create a separate read_prompt() function that takes the prompt name and the user's input as arguments. Because this application is currently only used for timezone conversion, we set that prompt's name as the default. To load it, we call the .get_prompt() method on the session, passing it the prompt name and passing the user's input to the timezone_request argument of the underlying prompt function. From there, we return the prompt's text by drilling down through a series of attributes.
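A sketch of this function, again with a stand-in session (an assumption for illustration) whose `.get_prompt()` result mirrors the nested messages/content/text shape the transcript describes:

```python
import asyncio
from types import SimpleNamespace

async def read_prompt(session, user_input: str,
                      name: str = "timezone_prompt") -> str:
    """Retrieve a prompt by name, passing the user's input through."""
    result = await session.get_prompt(
        name, arguments={"timezone_request": user_input})
    # The text sits a few attributes deep in the result's messages
    return result.messages[0].content.text

# Stand-in session (illustrative assumption): appends the user's
# request to a template, like the server-side prompt function would.
class FakeSession:
    async def get_prompt(self, name, arguments):
        text = f"Task and rules...\n\nUser request: {arguments['timezone_request']}"
        return SimpleNamespace(messages=[
            SimpleNamespace(content=SimpleNamespace(text=text)),
        ])

prompt_text = asyncio.run(read_prompt(FakeSession(), "What time is it in Canada?"))
print(prompt_text)
```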

11. Client: Retrieving Prompts

If we run our read_prompt function on an example user input, we can see it gets appended to the end of our template prompt, so the LLM will have the user's input alongside the task and rules. When we combine this server with an LLM later in the chapter, we'll see whether this prompt offers sufficient resilience to ambiguous user inputs.

12. Let's practice!

For now, time to create a prompt for your currency server!
