Have you read the custom model provider docs, including the 'Common issues' section? Yes
Have you searched for related issues? Others may have faced similar issues: Yes
Describe the question
When the agent attempts to submit a tool call result via the LitellmModel abstraction, the request fails with a 400 Bad Request if reasoning is enabled. I've tested this in particular on Anthropic Claude 3.7 with reasoning effort set to high.
Debug information
Agents SDK version: v0.0.14
LiteLLM version: 1.67.4
Python version: 3.13
Repro steps
Here is a modified version of the litellm_provider.py example. The only change I've made, aside from hardcoding the model, is to pass a reasoning effort. This value is correctly sent to the underlying LiteLLM call and enables reasoning on Claude. However, after Claude decides to use the get_weather tool, the tool submission message chain loses all information related to the thinking blocks, which causes the Anthropic API to return a 400:
litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected `thinking` or `redacted_thinking`, but found `tool_use`. When `thinking` is enabled, a final `assistant` message must start with a thinking block (preceeding the lastmost set of `tool_use` and `tool_result` blocks). We recommend you include thinking blocks from previous turns. To avoid this requirement, disable `thinking`. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}
Example:
import asyncio

from agents import Agent, Runner, function_tool, set_tracing_disabled, ModelSettings
from agents.extensions.models.litellm_model import LitellmModel
from openai.types.shared.reasoning import Reasoning


@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model="anthropic/claude-3-7-sonnet-20250219"),
        tools=[get_weather],
        model_settings=ModelSettings(reasoning=Reasoning(effort="high")),
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
Expected behavior
I would expect the agent to return the weather in Tokyo, without failure.
When using LiteLLM directly, I'm able to accomplish tool calling with a reasoning model.
This issue unfortunately makes it impossible to properly use non-OpenAI reasoning models with agents.
Cause
I believe the cause of this error is that the message conversion steps strip model-provider-specific details, such as thinking blocks, from the message chain. These are maintained on the LiteLLM message type but are lost during the following conversion steps done in the LitellmModel:
input item -> chat completion -> LiteLLM
LiteLLM -> chat completion -> output item
I think in order to properly support other model providers via LiteLLM, there needs to be a way to preserve model-specific message properties across the various message models.
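For reference, here is a minimal hand-built sketch (against the Anthropic Messages API directly, not the SDK's conversion code) of the message shape Anthropic expects when thinking is enabled: the thinking block, with its signature, must be replayed at the start of the assistant turn that carries the tool_use block. The thinking/signature values and the tool_use id below are placeholders; in the real flow they would be copied verbatim from the previous model response.

import anthropic

client = anthropic.Anthropic()

# Assistant turn replayed with the thinking block first, then the tool_use
# block -- this is exactly the information that gets dropped in the
# input item -> chat completion -> LiteLLM conversion.
messages = [
    {"role": "user", "content": "What's the weather in Tokyo?"},
    {
        "role": "assistant",
        "content": [
            {
                "type": "thinking",
                "thinking": "<thinking text from the previous turn>",
                "signature": "<signature from the previous turn>",
            },
            {
                "type": "tool_use",
                "id": "toolu_123",
                "name": "get_weather",
                "input": {"city": "Tokyo"},
            },
        ],
    },
    {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": "toolu_123",
                "content": "The weather in Tokyo is sunny.",
            }
        ],
    },
]

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},
    tools=[
        {
            "name": "get_weather",
            "description": "Get the weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    messages=messages,
)
print(response.content)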
I have the same issue using the OpenAI client for JavaScript with Anthropic Sonnet 4:
ERROR BadRequestError: 400 messages.7.content.0.type: Expected `thinking` or `redacted_thinking`, but found `tool_use`. When `thinking` is enabled, a final `assistant` message must start with a thinking block (preceeding the lastmost set of `tool_use` and `tool_result` blocks). We recommend you include thinking blocks from previous turns. To avoid this requirement, disable `thinking`. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking
at APIError.generate (C:\_svn\_praxis\code\pria_ui\pria-ui-v22\pria-ui-v22\node_modules\openai\error.js:45:20)
at OpenAI.makeStatusError (C:\_svn\_praxis\code\pria_ui\pria-ui-v22\pria-ui-v22\node_modules\openai\core.js:302:33)
at OpenAI.makeRequest (C:\_svn\_praxis\code\pria_ui\pria-ui-v22\pria-ui-v22\node_modules\openai\core.js:346:30)
at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
at async doStreamConversation (C:\_svn\_praxis\code\pria_ui\pria-ui-v22\pria-ui-v22\routes\middlewares\openai.js:710:28)
at async doStreamConversation (C:\_svn\_praxis\code\pria_ui\pria-ui-v22\pria-ui-v22\routes\middlewares\openai.js:960:29)
at async Object.streamConversation (C:\_svn\_praxis\code\pria_ui\pria-ui-v22\pria-ui-v22\routes\middlewares\openai.js:588:5)
In Bedrock, when using models like Anthropic's with reasoning, you can get the thinking text block from the LLM, such as
item?.contentBlockDelta?.delta?.reasoningContent
{
  text: 'i am thinking...',
  signature: 'xxxxx'
}
Then, prior to making the tool_use call, you add the thinking blocks back as an assistant message, as sketched below.
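A minimal Python sketch of that pattern (the snippet above is JavaScript against Bedrock; the event and field names here mirror it and are assumptions about the provider payload, not a verified client call):

# Hypothetical sketch of the workaround described above: accumulate the
# reasoning (thinking) deltas from the stream, then replay them at the start
# of the assistant turn before submitting the tool result.

def collect_reasoning(stream_events):
    """Gather reasoning text and its signature from contentBlockDelta events."""
    text_parts, signature = [], None
    for item in stream_events:
        reasoning = (
            item.get("contentBlockDelta", {}).get("delta", {}).get("reasoningContent")
        )
        if reasoning:
            text_parts.append(reasoning.get("text", ""))
            signature = reasoning.get("signature", signature)
    return {"text": "".join(text_parts), "signature": signature}


def build_assistant_turn(reasoning, tool_use_block):
    """Rebuild the assistant message with the thinking block first,
    followed by the tool_use block, so the provider accepts the replay."""
    return {
        "role": "assistant",
        "content": [
            {
                "type": "thinking",
                "thinking": reasoning["text"],
                "signature": reasoning["signature"],
            },
            tool_use_block,
        ],
    }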