Chain-of-thought and structured output #800
Comments
@akira108 I might be missing something, but why can't you do this:
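Roughly along these lines (a minimal sketch: the output model, tool, prompt, and names are illustrative placeholders, not the original snippet):

```python
import asyncio

from pydantic import BaseModel

from agents import Agent, Runner, function_tool


class FinalReport(BaseModel):
    # Illustrative structured output; use whatever fields you actually need.
    summary: str
    score: float


@function_tool
def lookup_fact(topic: str) -> str:
    """Placeholder tool the model can call while it reasons."""
    return f"Some fact about {topic}"


agent = Agent(
    name="Researcher",
    instructions="Call tools as needed, then produce the final report.",
    model="o4-mini",          # a reasoning model
    tools=[lookup_fact],
    output_type=FinalReport,  # the run ends once the model emits this type
)


async def main() -> None:
    result = await Runner.run(agent, "Write a short report about topic X.")
    report: FinalReport = result.final_output  # parsed into the Pydantic model
    print(report)


if __name__ == "__main__":
    asyncio.run(main())
```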
The reasoning model should automatically call tools in its CoT, and keep going until it produces an output of that type.
Thanks for the reply! I’d love to use the reasoning model, but due to cost considerations, I’m trying to make it work with gpt-4.1. Do you have any suggestions for achieving similar behavior with gpt-4.1?
Oh gotcha, your approach should work for that. Note that because you're prompting the agent, it might be finicky.
You could also try o4-mini.
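For reference, a minimal sketch of the non-streamed flow this suggests, assuming a build_output tool and StopAtTools (the model name, tool fields, and the StopAtTools import path are assumptions): with no output_type set, final_output is simply whatever build_output returned.

```python
import asyncio

from pydantic import BaseModel

from agents import Agent, Runner, function_tool
from agents.agent import StopAtTools  # import path may differ across SDK versions


class FinalOutput(BaseModel):
    # Illustrative target model.
    answer: str
    confidence: float


@function_tool
def build_output(answer: str, confidence: float) -> FinalOutput:
    """Placeholder final tool; returns the structured result."""
    return FinalOutput(answer=answer, confidence=confidence)


agent = Agent(
    name="CoT agent",
    instructions=(
        "Think step by step, call any other tools you need, and call "
        "build_output exactly once when you have the final answer."
    ),
    model="gpt-4.1",
    tools=[build_output],
    # Stop the run as soon as build_output has produced its result.
    tool_use_behavior=StopAtTools(stop_at_tool_names=["build_output"]),
)


async def main() -> None:
    result = await Runner.run(agent, "What is the answer to X?")
    # No output_type is set, so final_output is whatever build_output returned.
    final: FinalOutput = result.final_output
    print(final)


if __name__ == "__main__":
    asyncio.run(main())
```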
Thanks so much — that really helps! I didn’t realize that even without specifying an output_type, final_output would take the return type of the tool. Appreciate the heads-up on the two pitfalls when prompting gpt-4.1, and I’ll definitely give o4-mini a try too!
Question
I'd like to force the LLM into chain-of-thought reasoning and step-by-step tool calls, and ultimately return a structured output built with a Pydantic model. To achieve that, I'm using StopAtTools and stopping the run once the structured output has been built, where build_output is the tool that builds it.
Today I can make this work by calling run_streamed, inspecting each interim tool-call result, and returning when the build_output call of the wanted type appears.
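Sketched roughly, that workaround might look like the following (the agent and build_output tool mirror the hypothetical setup from the comment above; the event and item type strings follow the SDK's documented streaming examples, but exact field names may vary by version):

```python
import asyncio

from pydantic import BaseModel

from agents import Agent, Runner, function_tool
from agents.agent import StopAtTools  # import path may differ across SDK versions


class FinalOutput(BaseModel):
    answer: str
    confidence: float


@function_tool
def build_output(answer: str, confidence: float) -> FinalOutput:
    """Placeholder final tool; returns the structured result."""
    return FinalOutput(answer=answer, confidence=confidence)


agent = Agent(
    name="CoT agent",
    instructions="Think step by step, then call build_output with the final answer.",
    model="gpt-4.1",
    tools=[build_output],
    tool_use_behavior=StopAtTools(stop_at_tool_names=["build_output"]),
)


async def run_and_extract(question: str) -> FinalOutput:
    result = Runner.run_streamed(agent, question)
    async for event in result.stream_events():
        # Watch each run item as it arrives and return as soon as a
        # build_output result of the wanted type shows up.
        if event.type == "run_item_stream_event" and event.item.type == "tool_call_output_item":
            output = event.item.output
            if isinstance(output, FinalOutput):
                return output
    raise RuntimeError("The model never called build_output")


if __name__ == "__main__":
    print(asyncio.run(run_and_extract("What is the answer to X?")))
```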
Is there a way to achieve the same thing without using run_streamed?