Insights: openai/openai-agents-python
Overview
3 Releases published by 2 people
- v0.0.14, published Apr 30, 2025
- v0.0.15, published May 15, 2025
- v0.0.16, published May 21, 2025
20 Pull requests merged by 10 people
- Fix Gemini API content filter handling (#746, merged May 23, 2025)
- Update MCP and tool docs (#736, merged May 23, 2025)
- Fix visualization recursion with cycle detection (#737, merged May 23, 2025)
- fix Gemini token validation issue with LiteLLM (#735, merged May 21, 2025)
- v0.0.16 (#733, merged May 21, 2025)
- Add support for local shell, image generator, code interpreter tools (#732, merged May 21, 2025)
- Hosted MCP support (#731, merged May 21, 2025)
- Upgrade openAI sdk version (#730, merged May 21, 2025)
- Add ID generation methods to TraceProvider (#729, merged May 21, 2025)
- DRAFT: Dev/add usage details to Usage class (#726, merged May 20, 2025)
- Add Galileo to external tracing processors list (#662, merged May 19, 2025)
- Added mcp 'instructions' attribute to the server (#706, merged May 18, 2025)
- Create AGENTS.md (#707, merged May 18, 2025)
- v0.0.15 (#701, merged May 15, 2025)
- feat: Streamable HTTP support (#643, merged May 14, 2025)
- Update search_agent.py (#677, merged May 14, 2025)
- feat: pass extra_body through to LiteLLM acompletion (#638, merged May 14, 2025)
- Fixed a bug for "detail" attribute in input image (#685, merged May 14, 2025)
- 0.0.14 release (#635, merged Apr 30, 2025)
- Update litellm version (#626, merged Apr 29, 2025)
30 Pull requests opened by 24 people
- [MCP][Utils] Add support for FastMCP processing (#631, opened Apr 30, 2025)
- fix: add ensure_ascii=False to json.dumps for correct Unicode output (#639, opened May 2, 2025)
- Examples: Fixed agent_patterns/streaming guardrails (#648, opened May 5, 2025)
- Fix typos in documentation and event naming across multiple files (#651, opened May 6, 2025)
- feat: Add support for image function tools (#654, opened May 6, 2025)
- Added support for gpt4o-realtime models for Speect to Speech interactions (#659, opened May 7, 2025)
- Fixed Python syntax (#665, opened May 8, 2025)
- Use `max_completion_tokens` param for OpenAI Chat Completion API (#679, opened May 11, 2025)
- Added response cost in the Usage (#682, opened May 12, 2025)
- Feature/message_filter (#687, opened May 12, 2025)
- docs: add DeepWiki badge for AI-powered project documentation (#689, opened May 13, 2025)
- docs: Fix Guardrail intro to include output validation (#697, opened May 15, 2025)
- Fix incorrect and ambiguous guardrail documentation: Input/Output Guardrails and agent sequencing (#699, opened May 15, 2025)
- docs: Fix context example syntax and improve clarity (#700, opened May 15, 2025)
- docs: Enhance fetch_user_age tool: include UID and correct output format (#702, opened May 15, 2025)
- Create The moon dev (#709, opened May 17, 2025)
- visualize the complete Agent Loop with an interactive UML (#713, opened May 18, 2025)
- Make Runner an abstract base class (#720, opened May 20, 2025)
- docs: fix output guardrail step description (#724, opened May 20, 2025)
- add get_tool_call_name method to extract tool name from ToolCallItem (#728, opened May 21, 2025)
- Jules wip 10459775938551966727 (#739, opened May 22, 2025)
- Added RunErrorDetails object for MaxTurnsExceeded exception (#743, opened May 22, 2025)
- Attach partial run data on errors (#747, opened May 23, 2025)
- remove "can can" with single can (#751, opened May 24, 2025)
- Add Sessions for Automatic Conversation History Management (#752, opened May 24, 2025)
- Docs: Fix Guardrail definition to include both input and output validations (#756, opened May 25, 2025)
- Fix typo: Replace 'two' with 'three' in /docs/mcp.md (#757, opened May 25, 2025)
- Changed a mistake (#759, opened May 26, 2025)
- Fix and Document `parallel_tool_calls` Attribute in ModelSettings (#763, opened May 26, 2025)
- Added support for passing tool_call_id via the RunContextWrapper (#766, opened May 27, 2025)
52 Issues closed by 14 people
- Ba (#755, closed May 27, 2025)
- MCP Stdio- Hosted tools are not supported with the ChatCompletions API (#761, closed May 26, 2025)
- How to pass hardcoded dynamic messages as agent's responses in the chat history? (#695, closed May 26, 2025)
- litellm proxy with openai-agents (#696, closed May 26, 2025)
- How to pass reasoning=generate_summary for ComputerTool Agent? (#698, closed May 26, 2025)
- Multiple handoffs requested Error tracing platform (#694, closed May 25, 2025)
- Troubleshooting Agent Handoff in Multi-Agent Workflow (#681, closed May 24, 2025)
- Infinite recursion in src/agents/extensions/visualization.py due to circular references (#668, closed May 23, 2025)
- Streaming chain of thoughts (#721, closed May 22, 2025)
- When using Japanese in AzureOpenAI, answers may not be displayed (#649, closed May 22, 2025)
- Threads api (#674, closed May 22, 2025)
- LiteLLM with Gemini token issue (#734, closed May 21, 2025)
- Passing images as input to an agent (#727, closed May 21, 2025)
- llm.txt? (#670, closed May 20, 2025)
- How to make a LiteLLM models run in Reasoning mode (#671, closed May 20, 2025)
- Add an example for telephony voice agent (#672, closed May 20, 2025)
- MCP Server instructions are currently ignored (#704, closed May 19, 2025)
- Does the formatted output (Agent.output_type) require model support? (#664, closed May 19, 2025)
- Question about streaming for subagents and tools, and tool hallucinations (#667, closed May 19, 2025)
- Is there a way to access reasoning_content when calling Runner.run? (#645, closed May 18, 2025)
- Providing a pydantic model instead of docstring for tool parameters. (#646, closed May 18, 2025)
- Creating Agents Dynamically (#641, closed May 16, 2025)
- How to add messages to the conversation history (#642, closed May 16, 2025)
- Braicool (#675, closed May 12, 2025)
- Does StopAtTools returns tool result directly to user instead of to LLM? (#632, closed May 12, 2025)
- additionalProperties should not be set for object types (#608, closed May 11, 2025)
- Handoff Agent and Tool Call Not Triggering Reliably in Multi-Agent Setup (#617, closed May 11, 2025)
- How to use on_handoff content in the agent (#627, closed May 11, 2025)
- What is the role of ReasoningItem (#480, closed May 10, 2025)
- example streaming events to the client (#653, closed May 10, 2025)
- Triage agent can not delegate task to handoff agent (#575, closed May 9, 2025)
- Agent gets stuck 'in-progress' (#647, closed May 8, 2025)
- How to use custom LLM Gateway having JWT authetication (#652, closed May 7, 2025)
- Integration of deterministic conversations and other agents (#603, closed May 6, 2025)
- Are MCPServer and MCPServerSse clients? (#640, closed May 5, 2025)
- how to use Code Interpreter or Image Output in OpenAI Agents SDK (#360, closed May 5, 2025)
- Custom Model Provider Not Working (#485, closed May 5, 2025)
- function call can not get call_id (#559, closed May 4, 2025)
- How to use llm outputs in the on_handoff function (#567, closed May 4, 2025)
- Tools should not be exeucted until all input guardrails have completed (#624, closed May 2, 2025)
- Files in the input user prompt (#557, closed May 2, 2025)
- from agents.extensions.models.litellm_model import LitellmModel (#621, closed May 1, 2025)
- Accessing reasoning tokens of another llm model in agents sdk (#462, closed May 1, 2025)
- [Bug]: ModuleNotFoundError: No module named 'enterprise' When Using litellm==1.48.1 in Google Colab (#614, closed Apr 30, 2025)
- ModuleNotFoundError: No module named 'enterprise' #10353 (#613, closed Apr 30, 2025)
- Add HTTP (non-stdio) MCP server support to Agents SDK (#616, closed Apr 29, 2025)
- OpenAI Agents SDK unable to contact local endpoint hosted by Ollama / LM Studio (#625, closed Apr 29, 2025)
- AWS Bedrock via LiteLLM (#620, closed Apr 29, 2025)
- how to print Mcp tools print (#615, closed Apr 28, 2025)
54 Issues opened by 51 people
- Only 1 handoff getting called no matter what (#771, opened May 28, 2025)
- function_tool calling - agents orchertratins (#769, opened May 27, 2025)
- Agents Reproducibility - seed ? top_p=0? (#768, opened May 27, 2025)
- How to dynamically add/remove tools in a `tool_use_behavior="run_llm_again"`-loop (#767, opened May 27, 2025)
- Tool calling with LiteLLM and thinking models fail (#765, opened May 27, 2025)
- How to implement handoff between parents and children agents in different paths (#764, opened May 27, 2025)
- parallel_tool_calls default is True in the client, despite docs saying False (#762, opened May 26, 2025)
- `ValidationError` from `InputTokensDetails` when using `LitellmModel` with `None` cached_tokens (#760, opened May 26, 2025)
- LiteLLM + Gemini 2.5 Pro: cached_tokens=None crashes Agents SDK with Pydantic int-validation error (#758, opened May 26, 2025)
- Streaming Output issue got unnecesory text in output (#753, opened May 24, 2025)
- support for bedrock prompt caching (#750, opened May 24, 2025)
- Will session memory be implemented in the future? (#748, opened May 23, 2025)
- Add Session Memory (#745, opened May 22, 2025)
- Issue with Multi-turn Handoff and Slot Filling (#741, opened May 22, 2025)
- Can't implement guardrails with litellm (#740, opened May 22, 2025)
- How to integrate audio input in agent sdk? (not realtime or voice pipeline, e.g., gpt-4o audio) (#738, opened May 22, 2025)
- How to use openai-agents to call text2image API? (#725, opened May 20, 2025)
- Unable to use tool calling when using Llama 4 scout via LiteLLM Proxy (#723, opened May 20, 2025)
- How would I handoff a non-reasoning model with tool calls to a reasoning model? (#722, opened May 20, 2025)
- Make intermediate results available when `MaxTurnExceededException` is thrown (#719, opened May 20, 2025)
- Very high response times at random during hand-offs (#717, opened May 19, 2025)
- Allow agent to return logprobs (#715, opened May 19, 2025)
- How to pass the control back to triage agent (#714, opened May 19, 2025)
- PyCharm Fails to Recognize Agent Constructor Arguments (v0.0.15) (#712, opened May 18, 2025)
- Tracing client error 400 (MCP) (#710, opened May 17, 2025)
- Make FuncTool and @function_tool decorated function callable (#708, opened May 16, 2025)
- Custom-LLM <think> tag handling (#703, opened May 15, 2025)
- MCP server restart cause Agent to fail (#693, opened May 14, 2025)
- Feature Request: Support streaming tool call outputs (#692, opened May 14, 2025)
- ImportError: cannot import name 'MCPServerStdio' from 'agents.mcp' (#691, opened May 14, 2025)
- Please add time travel (#688, opened May 13, 2025)
- Agent attempts to use non-existing tool (#686, opened May 12, 2025)
- Feature Request: Allow Separate Models for Tool Execution and Final Response in OpenAI Agent SDK (#684, opened May 12, 2025)
- Add response cost in the Usage (#683, opened May 12, 2025)
- Unable to use reasoning models with tool calls using LitellmModel (#678, opened May 11, 2025)
- How to provide resources with a MCP server? (#676, opened May 11, 2025)
- Error code: 400 "No tool output found for function call" (#673, opened May 10, 2025)
- from agents.extensions.models.litellm_model import LitellmModel (#666, opened May 8, 2025)
- Custom model provider ignored when using agents as tools (#663, opened May 7, 2025)
- First-class streaming tool output (#661, opened May 7, 2025)
- the same isinstance(output, ResponseFunctionToolCall) check twice in "_run_impl.py" (#658, opened May 7, 2025)
- OAuth support for MCPServerSse (#657, opened May 7, 2025)
- Function calling fails on "application/json" MIME type with the latest Gemini models (#656, opened May 6, 2025)
- Input format in agent as tool (#655, opened May 6, 2025)
- Human-In-The-Loop Architecture should be implemented on top priority! (#636, opened May 1, 2025)
- no attribute error occurs while calling MCP (#630, opened Apr 30, 2025)
- Intent Classifier Support (#628, opened Apr 29, 2025)
- on_agent_start hook should be more performant (#623, opened Apr 29, 2025)
- Can we use agent.run instead of Runner.run(starting_agent=agent) (#622, opened Apr 29, 2025)
- Resource tracker warning (leaked semaphores) with MCPServerStdio (#618, opened Apr 28, 2025)
33 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Add a new GH Actions job to automatically update translated document pagse (#598, commented on May 21, 2025; 1 new comment)
- 'handoffs' and 'agent.as_tool' have different performances. (#224, commented on Apr 29, 2025; 0 new comments)
- Ordering of events in Runner.run_streamed is incorrect (#583, commented on May 16, 2025; 0 new comments)
- Question about Tool Call Streaming (#326, commented on May 16, 2025; 0 new comments)
- Duplicate tool names across MCP servers cause errors (#464, commented on May 17, 2025; 0 new comments)
- Why does the Computer protocol not have the goto method? (#547, commented on May 19, 2025; 0 new comments)
- Proper way of managing large context window for ComputerTool (#111, commented on May 20, 2025; 0 new comments)
- Support for OpenAI agents sdk with Javascript/Typescript (#240, commented on May 21, 2025; 0 new comments)
- How to make the conversation finally back to the MAIN AGENT (#527, commented on May 22, 2025; 0 new comments)
- Support for MCP prompts and resources (#544, commented on May 22, 2025; 0 new comments)
- Add Support for Image Return in Agent Tools (#341, commented on May 24, 2025; 0 new comments)
- Cannot get the last tool_call_output event in stream_events when MaxTurnsExceeded (#526, commented on May 26, 2025; 0 new comments)
- Enhance `on_tool_start` Hook to Include Tool Call Arguments (#252, commented on May 26, 2025; 0 new comments)
- Add tool call parameters for `on_tool_start` hook (#253, commented on May 23, 2025; 0 new comments)
- add reasoning content to ChatCompletions (#494, commented on May 5, 2025; 0 new comments)
- Added cached_tokens to the usage monitoring. (#555, commented on May 4, 2025; 0 new comments)
- Make input/new items available in the run context (#572, commented on May 13, 2025; 0 new comments)
- Support For CodeAct In The Future? (#383, commented on Apr 29, 2025; 0 new comments)
- Random transcript gets printed/generated when talking to the voice agent implemented using "VoicePipline". Eg - "Transcription: Kurs." Mind you there is no background noise. (#368, commented on Apr 29, 2025; 0 new comments)
- invalid_request_error when using "chat_completions" with triage agent (gemini -> any other model) (#237, commented on May 1, 2025; 0 new comments)
- Add reasoning support for custom models. (#492, commented on May 1, 2025; 0 new comments)
- human-in-the-loop (#378, commented on May 5, 2025; 0 new comments)
- Tool Calling Running in Loop Until Max-Turn (#191, commented on May 5, 2025; 0 new comments)
- Streamed Voice Agent Demo - Multiple Performance Issues (#301, commented on May 6, 2025; 0 new comments)
- Add Intro message function for VoicePipeline (#488, commented on May 6, 2025; 0 new comments)
- Timeout after 300 seconds with any error message. Could it be rate limiting? (#511, commented on May 7, 2025; 0 new comments)
- Reasoning model items provide to General model (#569, commented on May 8, 2025; 0 new comments)
- Retry mechanism for ModelBehaviorError (#325, commented on May 8, 2025; 0 new comments)
- Missing Handling for `delta.reasoning_content` in `agents.models.chatcmpl_stream_handler.ChatCmplStreamHandler.handle_stream` (#578, commented on May 9, 2025; 0 new comments)
- Add HTTP Streamable support for MCP's (#600, commented on May 9, 2025; 0 new comments)
- Websocket streaming audio in realtime from client (#536, commented on May 12, 2025; 0 new comments)
- History Cleaning (#545, commented on May 12, 2025; 0 new comments)
- is there a way to block the handoff to an agent based on a custom logic? (#585, commented on May 15, 2025; 0 new comments)