Retry mechanism for ModelBehaviorError #325
Hmm, can you share some code?
This is the exception that occurred:
The agent is defined as:
Run config:
I ran the agent with:
Ah, got it. I can add something to fix this.
@rm-openai hey! Having the same question regarding retries, and also wondering about the proper way of using tenacity.retry with Runner.run_streamed (if possible). Thanks!
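Not an official pattern, but while waiting for built-in support, here is a rough sketch of wrapping Runner.run_streamed with tenacity; the agent definition and retry parameters below are assumptions. Because run_streamed returns immediately and a ModelBehaviorError only surfaces while the events are consumed, the whole consumption loop has to live inside the retried function:

```python
# Sketch only: retry a streamed run when the model misbehaves.
# The agent setup and retry parameters are assumptions, not recommendations.
import asyncio

from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

from agents import Agent, Runner
from agents.exceptions import ModelBehaviorError

agent = Agent(name="Assistant", instructions="Be helpful.")


@retry(
    retry=retry_if_exception_type(ModelBehaviorError),
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, max=30),
    reraise=True,
)
async def run_streamed_with_retry(user_input: str):
    # run_streamed returns immediately; exceptions (including ModelBehaviorError)
    # are raised while iterating stream_events(), so the consumption loop must
    # live inside the retried function for tenacity to see the failure.
    result = Runner.run_streamed(agent, user_input)
    async for event in result.stream_events():
        pass  # forward events to your UI here
    return result.final_output


print(asyncio.run(run_streamed_with_retry("Hello!")))
```

One caveat with this approach: any events already forwarded to the caller before the failure will be emitted again on the next attempt, so the consumer has to tolerate partial, repeated output.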
@rm-openai Also curious about retries with Runner.run_streamed, or what the proper mechanism is.
Also for rate limits: I have a flow that might make 50 tool calls, and I'm hitting the tokens-per-minute limit. I would expect the run to continue with exponential-backoff retries.
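For the rate-limit case specifically, a similar sketch (the attempt count and backoff values are placeholders, not recommendations) retries on openai.RateLimitError with exponential backoff:

```python
# Sketch only: exponential backoff on 429s raised by the underlying OpenAI client.
import asyncio

import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="Be helpful.")


@retry(
    retry=retry_if_exception_type(openai.RateLimitError),
    stop=stop_after_attempt(6),
    wait=wait_exponential(multiplier=2, max=120),  # back off up to two minutes
    reraise=True,
)
async def run_with_backoff(user_input: str):
    return await Runner.run(agent, user_input)


result = asyncio.run(run_with_backoff("Summarize these 50 documents"))
print(result.final_output)
```

Note that each retry restarts the run from the beginning rather than resuming at the failed tool call.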
Need this one also |
This would be useful. Using an AzureOpenAI model that supports Structured Outputs (specifically gpt-4o-2024-11-20), I've occasionally seen this error as well.
@dilwong Not related, but I have an issue getting AzureOpenAI with Agents to run. I always get an api-version issue with 2025-01-01-preview, or a 404, yet when I call Azure OpenAI on its own it works; as soon as I set it up with the agent I get the error.
@LatVAlY Probably not the place for this, but here is a minimal example:
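(The snippet itself was not captured here. A minimal configuration along those lines might look roughly like the following; the endpoint, deployment name, and api-version are placeholders, not values from the original example.)

```python
# Sketch of pointing the Agents SDK at Azure OpenAI.
# Endpoint, deployment, and api-version values below are placeholders.
import asyncio

from openai import AsyncAzureOpenAI

from agents import Agent, OpenAIChatCompletionsModel, Runner, set_tracing_disabled

azure_client = AsyncAzureOpenAI(
    api_key="<your-key>",
    api_version="2024-10-21",  # must be a version your Azure resource actually supports
    azure_endpoint="https://<resource>.openai.azure.com",
)

# Trace uploads go to the standard OpenAI API; disable tracing (or provide a
# separate OPENAI_API_KEY) when only Azure credentials are available.
set_tracing_disabled(True)

agent = Agent(
    name="Assistant",
    instructions="Be helpful.",
    model=OpenAIChatCompletionsModel(
        model="<your-deployment-name>",  # the Azure deployment name, not the base model name
        openai_client=azure_client,
    ),
)

result = asyncio.run(Runner.run(agent, "Hello!"))
print(result.final_output)
```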
I have used this framework extensively over the past week. It performs pretty well, except that in rare circumstances the LLM will attempt to call a nonexistent tool, which crashes a whole 10-minute agent run. Can you implement a retry mechanism that re-executes the errored LLM call, so the run has a chance to recover from the error?
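Until something like that exists in the SDK, one user-side workaround is to catch ModelBehaviorError around Runner.run and simply re-run; the sketch below assumes a placeholder agent and an arbitrary attempt count:

```python
# Sketch of a user-side workaround: re-run the whole agent loop when the
# model misbehaves (e.g. calls a nonexistent tool). Attempt count is arbitrary.
import asyncio

from agents import Agent, Runner
from agents.exceptions import ModelBehaviorError

agent = Agent(name="Assistant", instructions="Be helpful.")


async def run_with_retries(user_input: str, attempts: int = 3):
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return await Runner.run(agent, user_input)
        except ModelBehaviorError as exc:
            last_error = exc
            print(f"Attempt {attempt} failed with ModelBehaviorError: {exc}")
    raise last_error  # all attempts failed


result = asyncio.run(run_with_retries("Do the long multi-step task"))
print(result.final_output)
```

The obvious drawback is that this restarts the run from scratch, so a failure near the end of a 10-minute run repeats all of the earlier work; retrying only the single failed model call, inside the Runner loop, is what a built-in mechanism would need to do.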