Comparing changes
base repository: openai/openai-agents-python
base: v0.0.11
head repository: openai/openai-agents-python
compare: v0.0.12
- 15 commits
- 75 files changed
- 4 contributors
Commits on Apr 15, 2025
- ce1abe6: Run CI on all commits, not just ones on main (#521)
  Was not running on my stacked PRs.
- 80de53e
- 65cae71
Commits on Apr 16, 2025
- 0faadf7: Show repo name/data in docs (#525)
  Easy linking back to the repo, plus some social proof (stars/forks etc.).
- bd404e0
- 472e8c1
Commits on Apr 17, 2025
- 5639606: Docs: Switch to o3 model; exclude translated pages from search (#533)
  This pull request introduces the following changes:
  1. Exclude translated pages from search: making the search plugin work with the i18n plugin would require extensive custom JavaScript hacks, so that work is on hold for now.
  2. Switch from GPT-4.1 to o3 for even better translation quality: while 4.1 performs well, o3 shows even greater quality for this task, and there's no reason to avoid using it.
Commits on Apr 21, 2025
- 4b8472d
- e3698f3: Enable non-strict output types (#539)
  See #528; some folks are having issues because their output types are not strict-compatible. The approach:
  1. Create `AgentOutputSchemaBase`, which represents the base methods for an output type: the JSON schema plus validation.
  2. Make the existing `AgentOutputSchema` subclass `AgentOutputSchemaBase`.
  3. Allow users to pass an `AgentOutputSchemaBase` to `Agent(output_type=...)`.
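The base-class idea above can be sketched as follows. This is a minimal stand-in, not the SDK's actual class: the method names and signatures are illustrative assumptions based on the commit description (a JSON schema plus validation).

```python
import json
from typing import Any


class NonStrictOutputSchema:
    """Hypothetical stand-in for an AgentOutputSchemaBase subclass:
    exposes a JSON schema plus validation, without strict-mode guarantees."""

    def name(self) -> str:
        return "NonStrictOutput"

    def is_strict_json_schema(self) -> bool:
        # Not strict-compatible, e.g. because additionalProperties is allowed.
        return False

    def json_schema(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "additionalProperties": True,
        }

    def validate_json(self, json_str: str) -> Any:
        data = json.loads(json_str)
        if not isinstance(data, dict) or "answer" not in data:
            raise ValueError("output must be an object with an 'answer' key")
        return data
```

Per the PR, such an object would then be passed as `Agent(output_type=...)` in place of a plain Pydantic type.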
- 616d8e7
- 0a3dfa0: Fix visualize graph filename to omit the extension (#554)
  Only the file name is needed, since graphviz's `render()` automatically adds the file extension. Unnecessary .gv (.dot) files were also being output, so the `cleanup=True` option has been added to prevent them from being saved. A similar modification, in a different context: #451.
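The `render()` behavior described above can be illustrated with a small sketch. The helper names (`build_graph`, `save_png`) are hypothetical, not the SDK's visualization API; only the `graphviz` calls reflect the commit's point.

```python
from graphviz import Digraph


def build_graph() -> Digraph:
    """Build a toy agent/tool graph."""
    g = Digraph(comment="agent visualization sketch")
    g.node("A", "Agent")
    g.node("T", "Tool")
    g.edge("A", "T")
    return g


def save_png(g: Digraph, filename: str = "agent_graph") -> str:
    # Pass "agent_graph", not "agent_graph.png": render() appends the
    # format's extension itself, and cleanup=True deletes the intermediate
    # .gv source file so no stray files are left behind.
    return g.render(filename, format="png", cleanup=True)
```

Passing `"agent_graph.png"` here would produce `agent_graph.png.png`, which is the bug the commit fixes.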
- a0254b0: RFC: automatically use litellm if possible (#534)
  Summary: This replaces the default model provider with a `MultiProvider`, which has the logic:
  - if the model name starts with `openai/` or doesn't contain "/", use OpenAI
  - if the model name starts with `litellm/`, use LiteLLM to resolve the appropriate model provider
  It's also extensible, so users can create their own mappings, and if Anthropic/Gemini etc. were natively supported, they could be added to `MultiProvider` too. The goal is to make it really easy to use any model provider: today, passing `model="gpt-4.1"` works great, but `model="claude-sonnet-3.7"` doesn't. If it can be made that easy, it's a win for devx. Open questions: is this too magical? Is the API too reliant on litellm? Comments welcome.
  Test plan: for now, the example; unit tests will be added if the approach is agreed to be worth merging.
  Co-authored-by: Steven Heidel <steven@heidel.ca>
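The routing rule in the RFC can be sketched as a small prefix dispatcher. This is an illustration of the described logic only; the real `MultiProvider` returns provider objects, not strings, and supports user-defined mappings.

```python
def pick_provider(model_name: str) -> str:
    """Return which backend should handle a model name, per the RFC's rule:
    'openai/' prefix or no '/' at all -> OpenAI; 'litellm/' prefix -> LiteLLM."""
    if model_name.startswith("openai/") or "/" not in model_name:
        return "openai"
    if model_name.startswith("litellm/"):
        return "litellm"
    raise ValueError(f"no provider mapped for {model_name!r}")
```

So a bare `gpt-4.1` keeps working as before, while a `litellm/`-prefixed name is delegated to LiteLLM's own provider resolution.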
- 942ba98
- 2bdf9b7: Pass through organization/project headers to tracing backend, fix speech_group enum (#562)
Commits on Apr 22, 2025
- 83ce49e
You can try running this command locally to see the comparison on your machine:
`git diff v0.0.11...v0.0.12`