Add MCP support to Agent SDK #40


Closed (wanted to merge 4 commits)

Conversation


@saqadri saqadri commented Mar 12, 2025

This change adds support for Model Context Protocol servers to Agent SDK using the mcp-agent library.

The support works for both streaming and non-streaming use cases and doesn't affect existing functionality for users of regular tools. MCP support is additive -- it works seamlessly alongside regular tool use (a developer can specify local function tools as well as MCP servers, and the tools list is extended with the MCP servers' tools).

The core implementation has been added to mcp.py, with surgical updates to Agent and Runner.

I have added several examples to examples/mcp to show the various usage patterns.
 
Detailed change overview:

Model Context Protocol (MCP) in OpenAI Agents SDK

This directory contains examples demonstrating how to use the Model Context Protocol (MCP) with the OpenAI Agents SDK. The integration allows agents to leverage tools from MCP servers alongside native OpenAI Agent SDK tools.

What is MCP?

Model Context Protocol (MCP) gives models access to tools, resources, and prompts in a standardized way, enabling interoperability between different AI systems and tool providers.

Using MCP servers in Agents SDK

mcp_servers property on Agent

You can specify the names of MCP servers an Agent should have access to by setting its mcp_servers property.

The Agent will then automatically aggregate tools from those servers with any locally specified tools, creating a single extended list. This means you can seamlessly use local tools, MCP servers, and other kinds of Agent SDK tools through a single unified syntax.

agent = Agent(
    name="MCP Assistant",
    instructions="You are a helpful assistant with access to MCP tools.",
    tools=[your_other_tools], # Regular tool use for Agent SDK
    mcp_servers=["fetch", "filesystem"]  # Names of MCP servers from your config file (see below)
)

MCP Configuration File

Configure MCP servers by creating an mcp_agent.config.yaml file. You can place this file in your project directory or any parent directory.

Here's an example configuration file that defines three MCP servers:

$schema: "https://raw.githubusercontent.com/lastmile-ai/mcp-agent/main/schema/mcp-agent.config.schema.json"

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
    slack:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-slack"]

For servers that require sensitive information like API keys, you can:

  1. Define them directly in the config file (not recommended for production)
  2. Use a separate mcp_agent.secrets.yaml file (more secure)
  3. Set them as environment variables
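
As an illustrative sketch of option 2 (the per-server `env` field names here follow the mcp-agent config schema, and the token values are placeholders), an mcp_agent.secrets.yaml for the slack server above might look like:

```yaml
# mcp_agent.secrets.yaml - keep this file out of version control
mcp:
  servers:
    slack:
      env:
        SLACK_BOT_TOKEN: "xoxb-your-token"
        SLACK_TEAM_ID: "T01234567"
```

The intent is that the secrets file is merged with the main config at load time, so server definitions stay shareable while credentials stay local.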

Methods for Configuring MCP

The OpenAI Agents SDK supports several ways to configure MCP servers:

1. Automatic Discovery (Recommended)

The simplest approach lets the SDK automatically discover your configuration files if they're named mcp_agent.config.yaml and mcp_agent.secrets.yaml:

from agents import Agent, Runner

# Create an agent that references MCP servers
agent = Agent(
    name="MCP Assistant",
    instructions="You are a helpful assistant with access to MCP tools.",
    mcp_servers=["fetch", "filesystem"]  # Names of servers from your config file
)

# The context object will be automatically populated
class AgentContext:
    pass

result = await Runner.run(agent, input="Hello world", context=AgentContext())

2. Explicit Config Path

You can explicitly specify the path to your config file:

class AgentContext:
    def __init__(self, mcp_config_path=None):
        self.mcp_config_path = mcp_config_path  # Will be used to load the config

context = AgentContext(mcp_config_path="/path/to/mcp_agent.config.yaml")

3. Programmatic Configuration

You can programmatically define your MCP settings:

from mcp_agent.config import MCPSettings, MCPServerSettings

# Define MCP config programmatically
mcp_config = MCPSettings(
    servers={
        "fetch": MCPServerSettings(
            command="uvx",
            args=["mcp-server-fetch"]
        ),
        "filesystem": MCPServerSettings(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "."]
        )
    }
)

class AgentContext:
    def __init__(self, mcp_config=None):
        self.mcp_config = mcp_config

context = AgentContext(mcp_config=mcp_config)

4. Custom Server Registry

You can create and configure your own MCP server registry:

from mcp_agent.mcp_server_registry import ServerRegistry
from mcp_agent.config import get_settings

# Create a custom server registry
settings = get_settings("/path/to/config.yaml")
server_registry = ServerRegistry(config=settings)

# Create an agent with this registry
agent = Agent(
    name="Custom Registry Agent",
    instructions="You have access to custom MCP servers.",
    mcp_servers=["fetch", "filesystem"],
    mcp_server_registry=server_registry  # Use custom registry
)

Examples

Basic Hello World

A simple example demonstrating how to create an agent that uses MCP tools:

# Create an agent with MCP servers
agent = Agent(
    name="MCP Assistant",
    instructions="You are a helpful assistant with access to tools.",
    tools=[get_current_weather],  # Local tools
    mcp_servers=["fetch", "filesystem"],  # MCP servers
)

# Run the agent
result = await Runner.run(
    agent,
    input="What's the weather in Miami? Also, can you fetch the OpenAI website?",
    context=AgentContext(),
)

print(result.response.value)

See hello_world.py for the complete example.

Streaming Responses

To stream responses instead of waiting for the complete result:

result = Runner.run_streamed(  # Note: No await here
    agent,
    input="Print the first paragraph of https://openai.github.io/openai-agents-python/",
    context=context,
)

# Stream the events
async for event in result.stream_events():
    if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
        print(event.data.delta, end="", flush=True)

See hello_world_streamed.py for the complete example.

Slack Integration

An example showing how to use MCP for Slack integration:

agent = Agent(
    name="Slack Finder",
    instructions="""You are an agent with access to Slack conversations.""",
    mcp_servers=["filesystem", "slack"],  # Include the slack MCP server
)

# Search for messages
result = await Runner.run(
    agent, 
    input="What was the last message in the general channel?",
    context=context,
)

See slack.py for the complete example.

@saqadri saqadri mentioned this pull request Mar 12, 2025
@AsharibAli

awesome work! @saqadri

Collaborator

@rm-openai rm-openai left a comment


Thanks for the PR. I'm unable to accept it as is for a few reasons:

  1. We don't want to change the min python version required.
  2. We want to keep this library light and with very few dependencies, in both the API and the implementation.

I'm supportive of the idea of MCP here - but I think this might not be the ideal approach. I'd recommend extracting this into an extension repository that users can use to extend their agents, rather than baking it into the SDK this way.

@saqadri
Author

saqadri commented Mar 12, 2025

Thanks for the PR. I'm unable to accept it as is for a few reasons:

@rm-openai thanks for your review!

  1. We don't want to change the min python version required.

The Python dependency can be reverted to 3.9 with a minor update to mcp-agent, so that shouldn't be a problem.

  2. We want to keep this library light and with very few dependencies, in both the API and the implementation.

I'm supportive of the idea of MCP here - but I think this might not be the ideal approach. I'd recommend extracting this into an extension repository that users can use to extend their agents, rather than baking it into the SDK this way.

This is reasonable. Would you be open to having an extension package in this repository itself, or would you prefer a separate repo altogether (with perhaps a comment in docs or readme about the MCP extension)?

Happy to address both these points today.

@saqadri
Author

saqadri commented Mar 12, 2025

Quick update -- I'm working on updating this to be an extension package. I'll try to complete the work today, and will keep the openai-agents-mcp PyPI package name, but instead of its current approach (a fork), it'll add an MCPAgent class that adds MCP support for folks who need it.

@rm-openai
Collaborator

Thank you. For now let's leave it as a separate repo. We're gonna circle up internally on the best way to support MCP and get back to you. Closing this PR as a result, but if there are any follow ups, feel free to reopen or create an issue.

@rm-openai rm-openai closed this Mar 12, 2025
@liurenhao

liurenhao commented Mar 13, 2025

This PR has been really helpful to me, but I ran into an issue during testing:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'filesystem-set_permissions': In context=('properties', 'permissions'), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[22].parameters', 'code': 'invalid_function_parameters'}}

After some debugging, I found that the problem was caused by nested sub-objects not having additionalProperties set. I fixed it with the following changes:

    # OpenAI requires all properties to be listed in the required array
    if "type" in result and result["type"] == "object" and "properties" in result:
        # Get all property names
        property_names = list(result.get("properties", {}).keys())

        # Set the required field to include all properties
        if property_names:
            result["required"] = property_names

        # Fix: disallow additional properties, as the API requires
        result["additionalProperties"] = False

Hope this helps!
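
The snippet above patches a single schema level, but the reported error came from a nested sub-object. A recursive variant (a hypothetical sketch, not code from this PR) would cover nested objects and array items as well:

```python
def make_strict(schema):
    """Recursively mark every object schema as strict: all properties
    required and additionalProperties disallowed. Hypothetical helper
    illustrating the fix described above."""
    if not isinstance(schema, dict):
        return schema
    if schema.get("type") == "object" and "properties" in schema:
        props = schema["properties"]
        # OpenAI's strict function schemas require every property listed
        # in "required" and additionalProperties set to false
        schema["required"] = list(props.keys())
        schema["additionalProperties"] = False
        for name, sub in props.items():
            props[name] = make_strict(sub)
    # Recurse into array item schemas as well
    if "items" in schema:
        schema["items"] = make_strict(schema["items"])
    return schema
```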

@saqadri
Author

saqadri commented Mar 13, 2025

This PR has been really helpful to me, but I ran into an issue during testing:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'filesystem-set_permissions': In context=('properties', 'permissions'), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[22].parameters', 'code': 'invalid_function_parameters'}}

After some debugging, I found that the problem was caused by nested sub-objects not having additionalProperties set. I fixed it with the following changes:

    # OpenAI requires all properties to be listed in the required array
    if "type" in result and result["type"] == "object" and "properties" in result:
        # Get all property names
        property_names = list(result.get("properties", {}).keys())

        # Set the required field to include all properties
        if property_names:
            result["required"] = property_names

        # Fix: disallow additional properties, as the API requires
        result["additionalProperties"] = False

Hope this helps!

@liurenhao thanks! I'll add this fix to the openai-agents-mcp package. I'm working to turn that into an extension module (will publish an update to PyPI tomorrow). In the meantime, can you please add an issue here: https://github.com/lastmile-ai/openai-agents-mcp

@nikhilmaddirala

@saqadri Have you published this yet?

@saqadri
Author

saqadri commented Mar 19, 2025

@saqadri Have you published this yet?

@nikhilmaddirala I just published this extension package: https://github.com/lastmile-ai/openai-agents-mcp

Please try it out and let me know how it goes:

Installation

uv add openai-agents-mcp

Usage

In order to use Agents SDK with MCP, simply replace the following import:

- from agents import Agent
+ from agents_mcp import Agent

With that you can instantiate an Agent with mcp_servers in addition to tools (which continue to work like before).

    from agents_mcp import Agent

    # Create an agent with specific MCP servers you want to use
    # These must be defined in your mcp_agent.config.yaml file
    agent = Agent(
        name="MCP Agent",
        instructions="""You are a helpful assistant with access to both local/OpenAI tools and tools from MCP servers. Use these tools to help the user.""",
        # Local/OpenAI tools
        tools=[get_current_weather],
        # Specify which MCP servers to use
        # These must be defined in your mcp_agent config
        mcp_servers=["fetch", "filesystem"],
    )

Then define an mcp_agent.config.yaml, with the MCP server configuration:

mcp:
  servers:
    fetch:
      command: npx
      args: ["-y", "@modelcontextprotocol/server-fetch"]
    filesystem:
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

The rest of the Agents SDK works exactly as before.

@nikhilmaddirala

@nikhilmaddirala I just published this extension package: https://github.com/lastmile-ai/openai-agents-mcp

@saqadri Thank you! Can you clarify what exactly you mean by "extension package"? Your repo appears to be a fork.

@saqadri
Author

saqadri commented Mar 20, 2025

@nikhilmaddirala I just published this extension package: https://github.com/lastmile-ai/openai-agents-mcp

@saqadri Thank you! Can you clarify what exactly you mean by "extension package"? Your repo appears to be a fork.

@nikhilmaddirala the readme goes into detail. It started as a fork, but now it just extends the Agent class from the base openai-agents package to add MCP support.

@nikhilmaddirala

@nikhilmaddirala I just published this extension package: https://github.com/lastmile-ai/openai-agents-mcp

@saqadri Thank you! Can you clarify what exactly you mean by "extension package"? Your repo appears to be a fork.

@nikhilmaddirala the readme goes into detail. It started as a fork, but now it just extends the Agent class from the base openai-agents package to add MCP support.

Thanks, I did read it, and I understand how your repo works, but I'm still not understanding what "extension package" means and how it differs from a fork.

@saqadri
Author

saqadri commented Mar 20, 2025

Thanks, I did read it, and I understand how your repo works, but I'm still not understanding what "extension package" means and how it differs from a fork.

@nikhilmaddirala by "extension package" I just meant that it can be used alongside the openai-agents package rather than in lieu of it. When I first implemented this feature, I had forked/cloned the entire Agents SDK codebase, made changes to Agent and Runner, and added MCP utils to a copy of the SDK itself. That wasn't sustainable, since I'd have to constantly merge in changes from the main project, and it effectively bifurcated the Agents SDK. The new implementation doesn't do that -- it depends on the openai-agents package and extends the Agent class with an MCP-compatible derived class. So now a project can depend on both openai-agents and openai-agents-mcp, where the latter enables MCP functionality while still leveraging the core components of the original SDK.

Hope this helps!
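
The extension pattern described above can be sketched roughly as follows (stand-in classes only, not the real agents or agents_mcp code):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Stand-in for the base agents.Agent class."""
    name: str
    instructions: str = ""
    tools: list = field(default_factory=list)

@dataclass
class MCPAgent(Agent):
    """Stand-in for agents_mcp.Agent: a derived class that adds MCP
    configuration on top of the base Agent, rather than forking it."""
    mcp_servers: list = field(default_factory=list)

# The derived class is still a regular Agent, so the rest of the SDK
# (Runner, tools, etc.) can treat it as one.
agent = MCPAgent(name="demo", mcp_servers=["fetch"])
```

Because MCPAgent only subclasses the published package, SDK updates flow through a normal dependency bump instead of fork merges.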

@nikhilmaddirala

Thanks, I did read it, and I understand how your repo works, but I'm still not understanding what "extension package" means and how it differs from a fork.

@nikhilmaddirala by "extension package" I just meant that it can be used alongside the openai-agents package rather than in lieu of it. When I first implemented this feature, I had forked/cloned the entire Agents SDK codebase, made changes to Agent and Runner, and added MCP utils to a copy of the SDK itself. That wasn't sustainable, since I'd have to constantly merge in changes from the main project, and it effectively bifurcated the Agents SDK. The new implementation doesn't do that -- it depends on the openai-agents package and extends the Agent class with an MCP-compatible derived class. So now a project can depend on both openai-agents and openai-agents-mcp, where the latter enables MCP functionality while still leveraging the core components of the original SDK.

Hope this helps!

This explanation is very helpful, thank you! :)

@yanmxa

yanmxa commented Mar 21, 2025

https://github.com/yanmxa/litemcp

This one is a lightweight implementation to support the Agent SDK with MCP in a non-invasive way.
