
Commit 71c7d4f

Merge branch 'openai:main' into main

2 parents 31dd691 + 6e078bf


42 files changed: +1356, -93 lines

AGENTS.md

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
+Welcome to the OpenAI Agents SDK repository. This file contains the main points for new contributors.
+
+## Repository overview
+
+- **Source code**: `src/agents/` contains the implementation.
+- **Tests**: `tests/` with a short guide in `tests/README.md`.
+- **Examples**: under `examples/`.
+- **Documentation**: markdown pages live in `docs/` with `mkdocs.yml` controlling the site.
+- **Utilities**: developer commands are defined in the `Makefile`.
+- **PR template**: `.github/PULL_REQUEST_TEMPLATE/pull_request_template.md` describes the information every PR must include.
+
+## Local workflow
+
+1. Format, lint and type-check your changes:
+
+```bash
+make format
+make lint
+make mypy
+```
+
+2. Run the tests:
+
+```bash
+make tests
+```
+
+To run a single test, use `uv run pytest -s -k <test_name>`.
+
+3. Build the documentation (optional but recommended for docs changes):
+
+```bash
+make build-docs
+```
+
+Coverage can be generated with `make coverage`.
+
+## Snapshot tests
+
+Some tests rely on inline snapshots. See `tests/README.md` for details on updating them:
+
+```bash
+make snapshots-fix     # update existing snapshots
+make snapshots-create  # create new snapshots
+```
+
+Run `make tests` again after updating snapshots to ensure they pass.
+
+## Style notes
+
+- Write comments as full sentences and end them with a period.
+
+## Pull request expectations
+
+PRs should use the template located at `.github/PULL_REQUEST_TEMPLATE/pull_request_template.md`. Provide a summary, test plan and issue number if applicable, then check that:
+
+- New tests are added when needed.
+- Documentation is updated.
+- `make lint` and `make format` have been run.
+- The full test suite passes.
+
+Commit messages should be concise and written in the imperative mood. Small, focused commits are preferred.
+
+## What reviewers look for
+
+- Tests covering new behaviour.
+- Consistent style: code formatted with `ruff format`, imports sorted, and type hints passing `mypy`.
+- Clear documentation for any public API changes.
+- Clean history and a helpful PR description.
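
For illustration, a hedged sketch of what an inline snapshot test can look like, assuming the suite uses the `inline-snapshot` pytest plugin that the `make snapshots-fix` / `make snapshots-create` targets appear to wrap; `serialize_event` is a hypothetical helper, not part of the SDK:

```python
from inline_snapshot import snapshot


def serialize_event(name: str) -> dict:
    # Hypothetical stand-in for real SDK serialization logic.
    return {"type": "run_item_stream_event", "name": name}


def test_serialize_event() -> None:
    # `make snapshots-create` fills in an empty snapshot(); `make snapshots-fix`
    # rewrites the recorded value when the expected output changes.
    assert serialize_event("tool_called") == snapshot(
        {"type": "run_item_stream_event", "name": "tool_called"}
    )
```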

docs/ja/mcp.md

Lines changed: 5 additions & 4 deletions
@@ -12,12 +12,13 @@ The Agents SDK supports MCP, which lets agents use a wide range of MCP
 
 ## MCP servers
 
-Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use.
+Currently, the MCP spec defines three kinds of servers, based on the transport mechanism they use.
 
-1. **stdio** servers run as a subprocess of your application; think of them as running locally.
+1. **stdio** servers run as a subprocess of your application; think of them as running locally.
 2. **HTTP over SSE** servers run remotely; you connect to them via a URL.
+3. **Streamable HTTP** servers run remotely using the Streamable HTTP transport defined in the MCP spec.
 
-You can connect to these servers using the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes.
+You can connect to these servers using the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp] classes.
 
 For example, here is what it looks like to use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem).
 
@@ -46,7 +47,7 @@ agent=Agent(
 
 ## Caching
 
-Every time an agent runs, `list_tools()` is called on the MCP server. This adds latency, especially when the server is remote. To cache the tool list automatically, pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. Only use this if you are certain the tool list will not change.
+Every time an agent runs, `list_tools()` is called on the MCP server. This adds latency, especially when the server is remote. To cache the tool list automatically, pass `cache_tools_list=True` to each of [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp]. Only use this if you are certain the tool list will not change.
 
 If you want to invalidate the cache, call `invalidate_tools_cache()` on the server.
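
To make the filesystem-server example referenced above concrete, here is a minimal, hedged sketch of connecting over stdio and listing tools; the sample directory path is a placeholder, not something from this commit:

```python
import asyncio

from agents.mcp import MCPServerStdio


async def main() -> None:
    samples_dir = "/tmp/mcp_samples"  # placeholder directory for illustration
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        }
    ) as server:
        # list_tools() is the same call the caching section above describes.
        tools = await server.list_tools()
        print([tool.name for tool in tools])


asyncio.run(main())
```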

docs/ja/tools.md

Lines changed: 4 additions & 0 deletions
@@ -17,6 +17,10 @@ OpenAI offers a few built-in tools when using the [`OpenAIResponsesModel`][agents.models.openai_responses.OpenAIRespons
 - [`WebSearchTool`][agents.tool.WebSearchTool] lets an agent search the web.
 - [`FileSearchTool`][agents.tool.FileSearchTool] retrieves information from your OpenAI vector stores.
 - [`ComputerTool`][agents.tool.ComputerTool] automates computer use tasks.
+- [`CodeInterpreterTool`][agents.tool.CodeInterpreterTool] executes code in a sandboxed environment.
+- [`HostedMCPTool`][agents.tool.HostedMCPTool] makes a remote MCP server's tools directly available to the model.
+- [`ImageGenerationTool`][agents.tool.ImageGenerationTool] generates images from a prompt.
+- [`LocalShellTool`][agents.tool.LocalShellTool] runs shell commands on the local machine.
 
 ```python
 from agents import Agent, FileSearchTool, Runner, WebSearchTool

docs/mcp.md

Lines changed: 3 additions & 2 deletions
@@ -12,8 +12,9 @@ Currently, the MCP spec defines two kinds of servers, based on the transport mec
 
 1. **stdio** servers run as a subprocess of your application. You can think of them as running "locally".
 2. **HTTP over SSE** servers run remotely. You connect to them via a URL.
+3. **Streamable HTTP** servers run remotely using the Streamable HTTP transport defined in the MCP spec.
 
-You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes to connect to these servers.
+You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp] classes to connect to these servers.
 
 For example, this is how you'd use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem).
 
@@ -42,7 +43,7 @@ agent=Agent(
 
 ## Caching
 
-Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. You should only do this if you're certain the tool list will not change.
+Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp]. You should only do this if you're certain the tool list will not change.
 
 If you want to invalidate the cache, you can call `invalidate_tools_cache()` on the servers.
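
A minimal sketch of the connection-plus-caching pattern described above, assuming the constructor arguments shown in the streamable HTTP example later in this commit; the URL is a local placeholder:

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp


async def main() -> None:
    async with MCPServerStreamableHttp(
        name="Example Streamable HTTP server",
        params={"url": "http://localhost:8000/mcp"},  # placeholder URL
        cache_tools_list=True,  # only safe if the server's tool list never changes
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use the MCP tools to answer questions.",
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "What tools are available?")
        print(result.final_output)


asyncio.run(main())
```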

docs/tools.md

Lines changed: 4 additions & 0 deletions
@@ -13,6 +13,10 @@ OpenAI offers a few built-in tools when using the [`OpenAIResponsesModel`][agent
 - The [`WebSearchTool`][agents.tool.WebSearchTool] lets an agent search the web.
 - The [`FileSearchTool`][agents.tool.FileSearchTool] allows retrieving information from your OpenAI Vector Stores.
 - The [`ComputerTool`][agents.tool.ComputerTool] allows automating computer use tasks.
+- The [`CodeInterpreterTool`][agents.tool.CodeInterpreterTool] lets the LLM execute code in a sandboxed environment.
+- The [`HostedMCPTool`][agents.tool.HostedMCPTool] exposes a remote MCP server's tools to the model.
+- The [`ImageGenerationTool`][agents.tool.ImageGenerationTool] generates images from a prompt.
+- The [`LocalShellTool`][agents.tool.LocalShellTool] runs shell commands on your machine.
 
 ```python
 from agents import Agent, FileSearchTool, Runner, WebSearchTool
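
# A hedged sketch of how the truncated example likely continues; the vector
# store ID is a placeholder, not something from this commit.
agent = Agent(
    name="Assistant",
    tools=[
        WebSearchTool(),
        FileSearchTool(
            max_num_results=3,
            vector_store_ids=["VECTOR_STORE_ID"],
        ),
    ],
)

# Usage (inside an async context):
#     result = await Runner.run(agent, "Summarize what MCP is.")
#     print(result.final_output)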

docs/tracing.md

Lines changed: 1 addition & 0 deletions
@@ -115,3 +115,4 @@ To customize this default setup, to send traces to alternative or additional bac
 - [Langfuse](https://langfuse.com/docs/integrations/openaiagentssdk/openai-agents)
 - [Langtrace](https://docs.langtrace.ai/supported-integrations/llm-frameworks/openai-agents-sdk)
 - [Okahu-Monocle](https://github.com/monocle2ai/monocle)
+- [Galileo](https://v2docs.galileo.ai/integrations/openai-agent-integration#openai-agent-integration)

examples/hosted_mcp/__init__.py

Whitespace-only changes.

examples/hosted_mcp/approvals.py

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
+import argparse
+import asyncio
+
+from agents import (
+    Agent,
+    HostedMCPTool,
+    MCPToolApprovalFunctionResult,
+    MCPToolApprovalRequest,
+    Runner,
+)
+
+"""This example demonstrates how to use the hosted MCP support in the OpenAI Responses API, with
+approval callbacks."""
+
+
+def approval_callback(request: MCPToolApprovalRequest) -> MCPToolApprovalFunctionResult:
+    answer = input(f"Approve running the tool `{request.data.name}`? (y/n) ")
+    result: MCPToolApprovalFunctionResult = {"approve": answer == "y"}
+    if not result["approve"]:
+        result["reason"] = "User denied"
+    return result
+
+
+async def main(verbose: bool, stream: bool):
+    agent = Agent(
+        name="Assistant",
+        tools=[
+            HostedMCPTool(
+                tool_config={
+                    "type": "mcp",
+                    "server_label": "gitmcp",
+                    "server_url": "https://gitmcp.io/openai/codex",
+                    "require_approval": "always",
+                },
+                on_approval_request=approval_callback,
+            )
+        ],
+    )
+
+    if stream:
+        result = Runner.run_streamed(agent, "Which language is this repo written in?")
+        async for event in result.stream_events():
+            if event.type == "run_item_stream_event":
+                print(f"Got event of type {event.item.__class__.__name__}")
+        print(f"Done streaming; final result: {result.final_output}")
+    else:
+        result = await Runner.run(agent, "Which language is this repo written in?")
+        print(result.final_output)
+
+    if verbose:
+        for item in result.new_items:
+            print(item)
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--verbose", action="store_true", default=False)
+    parser.add_argument("--stream", action="store_true", default=False)
+    args = parser.parse_args()
+
+    asyncio.run(main(args.verbose, args.stream))
examples/hosted_mcp/simple.py

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+import argparse
+import asyncio
+
+from agents import Agent, HostedMCPTool, Runner
+
+"""This example demonstrates how to use the hosted MCP support in the OpenAI Responses API, with
+approvals not required for any tools. You should only use this for trusted MCP servers."""
+
+
+async def main(verbose: bool, stream: bool):
+    agent = Agent(
+        name="Assistant",
+        tools=[
+            HostedMCPTool(
+                tool_config={
+                    "type": "mcp",
+                    "server_label": "gitmcp",
+                    "server_url": "https://gitmcp.io/openai/codex",
+                    "require_approval": "never",
+                }
+            )
+        ],
+    )
+
+    if stream:
+        result = Runner.run_streamed(agent, "Which language is this repo written in?")
+        async for event in result.stream_events():
+            if event.type == "run_item_stream_event":
+                print(f"Got event of type {event.item.__class__.__name__}")
+        print(f"Done streaming; final result: {result.final_output}")
+    else:
+        result = await Runner.run(agent, "Which language is this repo written in?")
+        print(result.final_output)
+        # The repository is primarily written in multiple languages, including Rust and TypeScript...
+
+    if verbose:
+        for item in result.new_items:
+            print(item)
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--verbose", action="store_true", default=False)
+    parser.add_argument("--stream", action="store_true", default=False)
+    args = parser.parse_args()
+
+    asyncio.run(main(args.verbose, args.stream))
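
Both hosted MCP scripts can presumably be run the same way as the other examples in this commit, for example `uv run python examples/hosted_mcp/simple.py` or `uv run python examples/hosted_mcp/approvals.py --stream --verbose`; the `--stream` and `--verbose` flags come from the argparse setup shown above.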
examples/mcp/streamablehttp_example/README.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+# MCP Streamable HTTP Example
+
+This example uses a local Streamable HTTP server in [server.py](server.py).
+
+Run the example via:
+
+```
+uv run python examples/mcp/streamablehttp_example/main.py
+```
+
+## Details
+
+The example uses the `MCPServerStreamableHttp` class from `agents.mcp`. The server runs in a sub-process at `http://localhost:8000/mcp`.
examples/mcp/streamablehttp_example/main.py

Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
+import asyncio
+import os
+import shutil
+import subprocess
+import time
+from typing import Any
+
+from agents import Agent, Runner, gen_trace_id, trace
+from agents.mcp import MCPServer, MCPServerStreamableHttp
+from agents.model_settings import ModelSettings
+
+
+async def run(mcp_server: MCPServer):
+    agent = Agent(
+        name="Assistant",
+        instructions="Use the tools to answer the questions.",
+        mcp_servers=[mcp_server],
+        model_settings=ModelSettings(tool_choice="required"),
+    )
+
+    # Use the `add` tool to add two numbers
+    message = "Add these numbers: 7 and 22."
+    print(f"Running: {message}")
+    result = await Runner.run(starting_agent=agent, input=message)
+    print(result.final_output)
+
+    # Run the `get_current_weather` tool
+    message = "What's the weather in Tokyo?"
+    print(f"\n\nRunning: {message}")
+    result = await Runner.run(starting_agent=agent, input=message)
+    print(result.final_output)
+
+    # Run the `get_secret_word` tool
+    message = "What's the secret word?"
+    print(f"\n\nRunning: {message}")
+    result = await Runner.run(starting_agent=agent, input=message)
+    print(result.final_output)
+
+
+async def main():
+    async with MCPServerStreamableHttp(
+        name="Streamable HTTP Python Server",
+        params={
+            "url": "http://localhost:8000/mcp",
+        },
+    ) as server:
+        trace_id = gen_trace_id()
+        with trace(workflow_name="Streamable HTTP Example", trace_id=trace_id):
+            print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}\n")
+            await run(server)
+
+
+if __name__ == "__main__":
+    # Let's make sure the user has uv installed
+    if not shutil.which("uv"):
+        raise RuntimeError(
+            "uv is not installed. Please install it: https://docs.astral.sh/uv/getting-started/installation/"
+        )
+
+    # We'll run the Streamable HTTP server in a subprocess. Usually this would be a remote server, but for this
+    # demo, we'll run it locally at http://localhost:8000/mcp
+    process: subprocess.Popen[Any] | None = None
+    try:
+        this_dir = os.path.dirname(os.path.abspath(__file__))
+        server_file = os.path.join(this_dir, "server.py")
+
+        print("Starting Streamable HTTP server at http://localhost:8000/mcp ...")
+
+        # Run `uv run server.py` to start the Streamable HTTP server
+        process = subprocess.Popen(["uv", "run", server_file])
+        # Give it 3 seconds to start
+        time.sleep(3)
+
+        print("Streamable HTTP server started. Running example...\n\n")
+    except Exception as e:
+        print(f"Error starting Streamable HTTP server: {e}")
+        exit(1)
+
+    try:
+        asyncio.run(main())
+    finally:
+        if process:
+            process.terminate()
examples/mcp/streamablehttp_example/server.py

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
+import random
+
+import requests
+from mcp.server.fastmcp import FastMCP
+
+# Create server
+mcp = FastMCP("Echo Server")
+
+
+@mcp.tool()
+def add(a: int, b: int) -> int:
+    """Add two numbers"""
+    print(f"[debug-server] add({a}, {b})")
+    return a + b
+
+
+@mcp.tool()
+def get_secret_word() -> str:
+    print("[debug-server] get_secret_word()")
+    return random.choice(["apple", "banana", "cherry"])
+
+
+@mcp.tool()
+def get_current_weather(city: str) -> str:
+    print(f"[debug-server] get_current_weather({city})")
+
+    endpoint = "https://wttr.in"
+    response = requests.get(f"{endpoint}/{city}")
+    return response.text
+
+
+if __name__ == "__main__":
+    mcp.run(transport="streamable-http")
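
As a quick check of the server above, a hedged sketch of listing its tools with the `MCPServerStreamableHttp` class from this commit, assuming the server is already running on localhost:8000:

```python
import asyncio

from agents.mcp import MCPServerStreamableHttp


async def main() -> None:
    async with MCPServerStreamableHttp(
        name="Local test client",
        params={"url": "http://localhost:8000/mcp"},
    ) as server:
        tools = await server.list_tools()
        # Expected tool names: add, get_secret_word, get_current_weather.
        print([tool.name for tool in tools])


asyncio.run(main())
```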

examples/research_bot/agents/search_agent.py

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 
 INSTRUCTIONS = (
     "You are a research assistant. Given a search term, you search the web for that term and "
-    "produce a concise summary of the results. The summary must 2-3 paragraphs and less than 300 "
+    "produce a concise summary of the results. The summary must be 2-3 paragraphs and less than 300 "
     "words. Capture the main points. Write succinctly, no need to have complete sentences or good "
    "grammar. This will be consumed by someone synthesizing a report, so its vital you capture the "
    "essence and ignore any fluff. Do not include any additional commentary other than the summary "
