
LLM Tool Use in MCP Servers

You've built an MCP server containing a tool for converting currencies using up-to-date exchange rates. Integrating it with an LLM will give it the ability to accurately answer questions about currencies and exchange rates—something it can't do by default.

The bulk of the code is provided for you here, as the main focus should be on understanding the workflow rather than syntax.
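The server's conversion tool itself isn't shown in this exercise. As a stand-in, its core logic might look like the following sketch, which assumes a hypothetical fixed rate table; a real MCP server would fetch live exchange rates instead:

```python
# Hypothetical fixed rate table standing in for a live exchange-rate API
RATES = {("USD", "EUR"): 0.92}

def convert_currency(amount: float, from_currency: str, to_currency: str) -> float:
    """Convert an amount between two currencies using the rate table."""
    rate = RATES[(from_currency, to_currency)]
    return round(amount * rate, 2)
```

Exposed as an MCP tool, this function's signature becomes the tool's input schema, which is what the LLM sees when deciding whether and how to call it.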

This exercise is part of the course

Introduction to Model Context Protocol (MCP)


Exercise instructions

  • Send the user query (user_query) and formatted list of tools (openai_tools) to the OpenAI LLM.
  • Call the MCP tool chosen by the LLM, using the name and arguments extracted from the response output (output).
  • Send the result (result) back to the OpenAI model to generate the final response.
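One detail worth noting for the second step: the model returns its chosen arguments as a JSON string, not a dictionary, so they must be parsed before being passed to the MCP tool. A minimal illustration, using hypothetical argument names:

```python
import json

# Hypothetical raw arguments string, as it appears in a function_call output item
raw_arguments = '{"amount": 250, "from_currency": "USD", "to_currency": "EUR"}'

# json.loads turns the string into a regular dict the MCP tool can consume
args = json.loads(raw_arguments)
```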

Hands-on interactive exercise

Try this exercise by completing the sample code.

import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from openai import AsyncOpenAI

async def call_openai_llm(user_query: str):
    """Call OpenAI LLM with MCP tools."""
    
    print(f"\nUser: {user_query}\n")

    mcp_tools = await get_tools_from_mcp()
    
    openai_tools = []
    for tool in mcp_tools:
        openai_tool = {
            "type": "function",
            "name": tool.name,
            "description": tool.description or "",
            "parameters": tool.inputSchema,
        }
        openai_tools.append(openai_tool)
    
    # Send the user query and formatted tools to the LLM
    client = AsyncOpenAI(api_key="")

    response = await client.responses.create(
        model="gpt-4o-mini",
        input=user_query,
        tools=openai_tools,
    )

    output = response.output[0]

    if output.type == "function_call":
        args = json.loads(output.arguments)
        name = output.name

        print(f"Model decided to call: {name}")
        print(f"Arguments: {args}\n")

        # Call the MCP tool
        result = await call_mcp_tool(name, args)

        # Send the result back to OpenAI for final response
        followup = await client.responses.create(
            model="gpt-4o-mini",
            input=[
                {"role": "user", "content": user_query},
                output,
                {
                    "type": "function_call_output",
                    "call_id": output.call_id,
                    "output": str(result),
                },
            ],
        )

        if followup.output and followup.output[0].type == "message":
            print(f"\nAssistant: {followup.output[0].content[0].text}")
            return str(followup.output[0].content[0].text)
        else:
            print("No follow-up message from model.")

    elif output.type == "message":
        print(f"\nAssistant: {output.content[0].text}")
        return str(output.content[0].text)
    else:
        print(f"Unhandled output type: {output.type}")


if __name__ == "__main__":
    asyncio.run(call_openai_llm("How much is 250 US dollars in euros?"))
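To recap the final step above: the follow-up request replays the conversation so far as a list of items: the original user message, the model's function_call item echoed back, and a function_call_output item carrying the serialized tool result under the same call_id. A sketch of that input list, with hypothetical values:

```python
user_query = "How much is 250 US dollars in euros?"
call_id = "call_abc123"    # hypothetical id, copied from the model's function_call item
tool_result = "230.0 EUR"  # hypothetical MCP tool result, serialized to a string

followup_input = [
    {"role": "user", "content": user_query},
    # ...the model's function_call output item is echoed back here...
    {
        "type": "function_call_output",
        "call_id": call_id,  # must match the function_call's call_id
        "output": tool_result,
    },
]
```

Matching the call_id is what lets the model associate the tool result with its earlier decision to call the tool.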