How would I handoff a non-reasoning model with tool calls to a reasoning model? #722
Comments
Would you mind sharing a quick code snippet that reproduces this? Looking into it.
Definitely; the following will give the first error.

```python
import asyncio
import random
from dataclasses import replace

from agents import (
    Agent,
    Runner,
    function_tool,
    handoff,
    MessageOutputItem,
    ReasoningItem,
    ToolCallItem,
    ToolCallOutputItem,
)
from agents.handoffs import HandoffInputData
from openai.types.responses import ResponseOutputMessage, ResponseOutputText

# ── filter that strips reasoning items just before the hand‑off ──────────────
def strip_reasoning_bundle(data: HandoffInputData) -> HandoffInputData:
    def transform(seq):
        cleaned = []
        for item in seq:
            if isinstance(item, ReasoningItem):
                continue
            cleaned.append(item)
        return tuple(cleaned)

    return replace(
        data,
        pre_handoff_items=transform(data.pre_handoff_items),
        new_items=transform(data.new_items),
    )

# ── tools ────────────────────────────────────────────────────────────────────
@function_tool
def make_haiku_about_haikus() -> str:
    return "\n".join([
        "Seventeen small breaths,",
        "A world folded into three,",
        "Haiku makes haiku.",
    ])

@function_tool
def choose_random_word(words: list[str]) -> str:
    return random.choice(words)

# ── agent 3 ──────────────────────────────────────────────────────────────────
final_haiku_agent = Agent(
    name="Final‑Haikuist",
    model="o3",
    instructions=("The prior agent will supply ONE word. "
                  "Write a 5‑7‑5 haiku containing that word exactly once."),
)

# ── agent 2 (general model) ───────────────────────────────────────────────────
word_picker_agent = Agent(
    name="Word‑Picker",
    model="gpt-4.1",
    instructions=("Call `choose_random_word` on the list you receive, then hand off "
                  "to Final‑Haikuist; add no extra text."),
    tools=[choose_random_word],
    handoffs=[handoff(final_haiku_agent)],
)

# ── agent 1 ──────────────────────────────────────────────────────────────────
intro_haiku_agent = Agent(
    name="Intro‑Haikuist",
    model="o3",
    instructions="Call `make_haiku_about_haikus`, then hand off to Word‑Picker.",
    tools=[make_haiku_about_haikus],
    # filter strips reasoning items just before the hand‑off
    handoffs=[handoff(word_picker_agent, input_filter=strip_reasoning_bundle)],
)

# ── driver ───────────────────────────────────────────────────────────────────
WORDS = ["moon", "blossom", "mountain", "breeze", "river"]

async def main() -> None:
    result = await Runner.run(intro_haiku_agent,
                              f"The candidate words are: {WORDS}")
    # RunResult has no `.turns`; inspect the generated items instead.
    for item in result.new_items:
        print(f"\n— {item.agent.name} — {item.type}")
    print(f"\nFinal output:\n{result.final_output}")

if __name__ == "__main__":
    asyncio.run(main())
```

Since I posted, I also tried removing the function calls and converting the function call result to plain text, to pass it to the reasoning model. That also gives me a similar error as trying to add custom reasoning.
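Roughly, that conversion looked like the sketch below (illustrative only: the placeholder message id and the `Tool result:` prefix are mine, and the exact construction may differ from what I actually ran):

```python
from dataclasses import replace

from agents import MessageOutputItem, ReasoningItem, ToolCallItem, ToolCallOutputItem
from agents.handoffs import HandoffInputData
from openai.types.responses import ResponseOutputMessage, ResponseOutputText


def tool_results_as_text(data: HandoffInputData) -> HandoffInputData:
    def transform(seq):
        cleaned = []
        for item in seq:
            # drop reasoning and function-call items entirely
            if isinstance(item, (ReasoningItem, ToolCallItem)):
                continue
            # re-inject each tool result as a plain assistant message
            if isinstance(item, ToolCallOutputItem):
                msg = ResponseOutputMessage(
                    id="msg_placeholder",  # placeholder id
                    role="assistant",
                    status="completed",
                    type="message",
                    content=[ResponseOutputText(
                        type="output_text",
                        text=f"Tool result: {item.output}",
                        annotations=[],
                    )],
                )
                cleaned.append(MessageOutputItem(agent=item.agent, raw_item=msg))
                continue
            cleaned.append(item)
        return tuple(cleaned)

    return replace(
        data,
        pre_handoff_items=transform(data.pre_handoff_items),
        new_items=transform(data.new_items),
    )
```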
This issue is stale because it has been open for 7 days with no activity.
Not stale girlie pop, I am checking this every day.
Seconded, I am also facing this issue: it is somehow impossible to have a multi-turn conversation with the agent doing these handoffs.
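A minimal sketch of the standard multi-turn pattern this refers to (the agent names, models, and instructions below are placeholders, not code from this thread):

```python
import asyncio

from agents import Agent, Runner

# Placeholder setup: a non-reasoning triage agent that hands off to a reasoning agent.
reasoning_agent = Agent(
    name="Reasoner",
    model="o3",
    instructions="Answer the question you are handed.",
)
triage_agent = Agent(
    name="Triage",
    model="gpt-4.1",
    instructions="Hand off to Reasoner.",
    handoffs=[reasoning_agent],
)


async def main() -> None:
    result = await Runner.run(triage_agent, "First question")
    print(result.final_output)

    # Second turn: feed the previous items back in plus a new user message.
    # The handoff-related error reportedly appears on this follow-up run.
    follow_up = result.to_input_list() + [
        {"role": "user", "content": "Follow-up question"}
    ]
    result = await Runner.run(triage_agent, follow_up)
    print(result.final_output)


asyncio.run(main())
```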
Thanks - lost track of this, but going to try and fix today.
deployed a fix, so you shouldn't see this error any more
Let me know if this resolves things or if there's more to be done!
@rm-openai thank you. Just to check, does this change the behavior so that non-reasoning models like gpt-4.1 now ignore reasoning items instead of erroring? Just to make sure I understand: this is the same behavior as in the playground, where if one switches from, say, o3 to gpt-4.1, the reasoning items are simply dropped?
That's right. Reasoning items will be ignored if passed to gpt-4.1, instead of raising an error.
@rm-openai Getting a 500; same snippet ^
@maininformer ah, sorry about that. Just to confirm, could you run once more and make sure you keep getting a 500 error?
@rm-openai Yeah, I ran this a couple of times before posting, but I just ran it again 3 more times to be sure. I do see you guys are having a couple of disruptions, but I'm unsure if it's related.
Question
I have a non-reasoning model, gpt-4.1, that does some tool calls and then hands off to a reasoning model, o3. I am seeing that the server wants reasoning items with a tool call.
I tried providing custom reasoning items during the handoff using hooks, but the reasoning ID is validated, so that will not work. I also tried leaving the ID blank so that maybe the backend would create it; that failed too.
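Roughly what that attempt looked like, shown here as a handoff input filter rather than hooks for brevity (the `rs_fake` id and the summary text are placeholders I made up, which is exactly what gets rejected):

```python
from dataclasses import replace

from agents import ReasoningItem, ToolCallItem
from agents.handoffs import HandoffInputData
from openai.types.responses import ResponseReasoningItem
from openai.types.responses.response_reasoning_item import Summary


def inject_fake_reasoning(data: HandoffInputData) -> HandoffInputData:
    def transform(seq):
        out = []
        for item in seq:
            # precede every function call with a made-up reasoning item
            if isinstance(item, ToolCallItem):
                fake = ResponseReasoningItem(
                    id="rs_fake",  # server validates this id and rejects it
                    type="reasoning",
                    summary=[Summary(type="summary_text",
                                     text="Calling the tool as instructed.")],
                )
                out.append(ReasoningItem(agent=item.agent, raw_item=fake))
            out.append(item)
        return tuple(out)

    return replace(
        data,
        pre_handoff_items=transform(data.pre_handoff_items),
        new_items=transform(data.new_items),
    )
```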
I tried removing the tool call items, but then the server says there can be no tool call results without a tool call item.
I do need the tool call results to be present in the context so o3 knows what happened. What do you suggest?
P.S. Switching between gpt-4.1 and o3 works great in the playground. Going from o3 to gpt-4.1 it does remove reasoning items, yes, but in the reverse direction nothing seems to be a problem.
Many thanks.