GPT-5 + tool calls: Error code: 400 - Item 'rs_...' of type 'reasoning' was provided without its required following item. #1660

@MartinXPN

Description

Describe the bug

After switching to GPT-5, I've started experiencing many errors of the following form:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Item 'rs_xxx' of type 'reasoning' was provided without its required following item.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

This happens when the agent has the following three tools available:

  • function_tool
  • WebSearchTool
  • CodeInterpreterTool

Debug information

  • Agents SDK version: 0.2.10
  • Python version: 3.12

Repro steps

Here is a complete example you can run to reproduce the issue. It usually crashes within 1-2 tool calls:

  • Install openai-agents, nltk, and python-dotenv
  • Add OPENAI_API_KEY as an environment variable
  • Run the following code:
import asyncio
import random
from textwrap import dedent

from agents import Agent, CodeInterpreterTool, ModelSettings, Runner, function_tool, trace, WebSearchTool
from dotenv import load_dotenv
from openai.types import Reasoning


# environment variables: OPENAI_API_KEY
load_dotenv()


@function_tool
async def retrieve_doc(query: str) -> str:
    """
    Retrieves a document for a specific iteration of the reasoning process.

    Parameters
    ----------
    query: str
        A detailed description of what document to retrieve.
        It should be based on all the previous documents retrieved.
        The query should be as specific as possible to get the most relevant document.

    Returns
    -------
    str
        The document for the given iteration
    """
    from nltk.corpus import gutenberg

    all_texts = [gutenberg.raw(fid) for fid in gutenberg.fileids()]
    iteration = random.randint(0, len(all_texts) - 1)
    limit = random.randint(10, 50_000)
    print(f"retrieve_doc({len(query)}) -> {iteration}/{len(all_texts)} => {limit}")

    text = all_texts[iteration]
    text = text[:limit]
    return text


agent = Agent(
    name="Agent",
    model="gpt-5",
    model_settings=ModelSettings(
        truncation="auto",
        parallel_tool_calls=True,
        verbosity="medium",
        reasoning=Reasoning(effort="medium"),
    ),
    instructions=dedent(
        """
        You are an AI Agent that executes tasks.
        Given a task description, you follow the instructions and complete the task just like it's described.
        """
    ).strip(),
    tools=[
        retrieve_doc,
        WebSearchTool(),
        CodeInterpreterTool(
            tool_config={
                "type": "code_interpreter",
                "container": {"type": "auto"},
            }
        ),
    ],
)


async def main():
    with trace("reasoning-issue-reproduction"):
        result = await Runner.run(
            agent,
            input=dedent(
                """
                Use the `retrieve_doc` tool to get 50 documents.
                - Make sure to use the content of the previously retrieved documents to form the query for the next document.
                - If you come across any numbers, use the code interpreter tool to perform calculations on those numbers.
                - Use those numbers and analysis to perform the next queries.
                - Verify your numbers with the web search tool if needed.
                Make sure to call the `retrieve_doc` exactly 50 times.
                DO NOT stop until you have retrieved all 50 documents no matter how long the reasoning chain is.
                The final response should contain the summary for ALL the documents retrieved.
                After retrieving all 50 documents, summarize the content of all the documents in a concise manner.
                """
            ).strip(),
            max_turns=100,
        )
    print(result.final_output)


if __name__ == "__main__":
    import nltk
    nltk.download("gutenberg")
    asyncio.run(main())

Expected behavior

The run should complete without raising a 400 error: when the SDK resends previous output items, each reasoning item should be accompanied by its required following item.
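For context, the error arises because the Responses API rejects an input list in which a `reasoning` item is not immediately followed by the item it produced (e.g. a function call). Below is a minimal client-side sketch of one possible mitigation: pruning orphaned reasoning items from the history before resending it. Note that `drop_orphan_reasoning` is a hypothetical helper written for illustration, not something the Agents SDK provides, and this does not fix the underlying SDK behavior.

```python
def drop_orphan_reasoning(items: list[dict]) -> list[dict]:
    """Drop 'reasoning' items that lack a required following item.

    Hypothetical workaround sketch: a reasoning item is treated as
    orphaned when it is the last item in the list, or when it is
    immediately followed by another reasoning item.
    """
    cleaned: list[dict] = []
    for i, item in enumerate(items):
        next_item = items[i + 1] if i + 1 < len(items) else None
        if item.get("type") == "reasoning" and (
            next_item is None or next_item.get("type") == "reasoning"
        ):
            continue  # orphaned: no required following item, skip it
        cleaned.append(item)
    return cleaned


# Example history with a trailing reasoning item (the shape that triggers the 400):
history = [
    {"type": "reasoning", "id": "rs_1"},
    {"type": "function_call", "call_id": "call_1", "name": "retrieve_doc"},
    {"type": "reasoning", "id": "rs_2"},  # trailing, would be rejected
]
print([it.get("type") for it in drop_orphan_reasoning(history)])
# → ['reasoning', 'function_call']
```

This only masks the symptom (the retained reasoning context for the dropped item is lost); the proper fix belongs in how the SDK pairs reasoning items with their tool-call outputs when building the next request's input.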
