Realtime docs #1153
Conversation
@rm-openai I'm telling you, tool calls are not working properly. I've tried everything. |
handoffs=[
    realtime_handoff(billing_agent, tool_description="Transfer to billing support"),
    realtime_handoff(technical_agent, tool_description="Transfer to technical support"),
]
What is the difference between just passing the agent directly and using `realtime_handoff`? In your web demo you just have `handoffs=[agent]`.
I set up this example and it doesn't reply with the string. Even basic tool calling doesn't work yet. If I just need to be patient, let me know. It says "I'll put that up" and I can see the tool being called, but it never responds with the output.
@sibblegp can you create a separate issue with a script to reproduce and I can take a look?
Will do! It could be you are running new code. If after you publish v2.1 it's still not working, I'll make an issue.
3. **Start the session** using `await runner.run()`, which returns a `RealtimeSession`.
4. **Send audio or text messages** to the session using `send_audio()` or `send_message()`.
5. **Listen for events** by iterating over the session - events include audio output, transcripts, tool calls, handoffs, and errors.
6. **Handle interruptions** when users speak over the agent, which automatically stops current audio generation.
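To make the event-loop shape of steps 3-6 concrete, here is a minimal sketch of how a consumer might iterate a session and dispatch events. The real `RealtimeSession` comes from the SDK; `FakeSession` and the event type names (`audio`, `tool_call`, `audio_interrupted`, `error`) are stand-ins chosen for illustration, not the SDK's actual event schema.

```python
import asyncio

class FakeSession:
    """Hypothetical stand-in for a RealtimeSession: an async iterable of events."""
    def __init__(self, events):
        self._events = events

    def __aiter__(self):
        return self._aiter()

    async def _aiter(self):
        for event in self._events:
            yield event

async def consume(session):
    handled = []
    # Step 5: iterate the session to receive events as they arrive.
    async for event in session:
        kind = event["type"]
        if kind == "audio":
            handled.append(("play", event["data"]))        # queue audio output for playback
        elif kind == "tool_call":
            handled.append(("tool", event["name"]))        # surface tool invocations
        elif kind == "audio_interrupted":
            handled.append(("stop_playback", None))        # step 6: user spoke over the agent
        elif kind == "error":
            handled.append(("error", event["message"]))
    return handled

events = [
    {"type": "audio", "data": b"\x00\x01"},
    {"type": "audio_interrupted"},
    {"type": "tool_call", "name": "lookup_order"},
]
result = asyncio.run(consume(FakeSession(events)))
```

The key point the sketch shows is that interruption handling is just another event in the same loop: the consumer's only job on interruption is to stop local playback, since the session stops generation on its own.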
@rm-openai Can you give an example of how you handle interruptions? I don't see any in the demo files made so far. I think you just need to detect when audio starts and then call truncate but I don't see how to do that. Thanks!
Also, neither demo runs for me. I get the same error on both the UI and the non-UI demos.
Documentation (both written and code)