Add GPT-5 support #150281
Conversation
Hey there @balloob, mind taking a look at this pull request as it has been labeled with an integration (`openai_conversation`) you are listed as a code owner for? Thanks!
Pull Request Overview
This PR adds support for GPT-5 models to the OpenAI Conversation integration, extending the existing reasoning-model functionality with new GPT-5-specific features. The changes introduce verbosity control for GPT-5 models and update the configuration options to accommodate the new model family.
Key Changes
- Adds GPT-5 model support with verbosity configuration option
- Updates test data to include GPT-5 model references and new configuration parameters
- Extends the reasoning-model logic to support both the "o" and "gpt-5" model prefixes (a hedged sketch of the resulting request parameters follows this list)
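The parameter handling summarized above can be illustrated with a minimal sketch. This is not the actual code from `entity.py`; the helper name `build_model_args`, the option keys, and the defaults are assumptions based on the PR description, showing how reasoning effort and verbosity might be attached only to the model families that accept them.

```python
# Minimal sketch, assuming illustrative option keys and defaults; not the PR's
# actual entity.py code. Reasoning models ("o*" and "gpt-5*") receive a reasoning
# effort setting, GPT-5 additionally receives the new verbosity setting, and all
# other models keep the classic sampling parameters.

def build_model_args(model: str, options: dict) -> dict:
    """Assemble keyword arguments for an OpenAI Responses API call."""
    model_args: dict = {"model": model}

    if model.startswith(("o", "gpt-5")):
        # Reasoning models take an effort level instead of temperature/top_p.
        model_args["reasoning"] = {"effort": options.get("reasoning_effort", "medium")}
        if model.startswith("gpt-5"):
            # Verbosity is the GPT-5-specific addition in this PR.
            model_args["text"] = {"verbosity": options.get("verbosity", "medium")}
    else:
        # Non-reasoning models (e.g. the GPT-4 family) keep sampling controls.
        model_args["temperature"] = options.get("temperature", 1.0)
        model_args["top_p"] = options.get("top_p", 1.0)

    return model_args
```

For example, `build_model_args("gpt-5", {"verbosity": "low"})` returns `{"model": "gpt-5", "reasoning": {"effort": "medium"}, "text": {"verbosity": "low"}}` under these assumed defaults.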
Reviewed Changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.
File | Description
---|---
`const.py` | Adds the `CONF_VERBOSITY` constant and the `RECOMMENDED_VERBOSITY` default value
`config_flow.py` | Updates the configuration flow to handle GPT-5 models with verbosity settings and different reasoning effort options
`entity.py` | Extends the model handling logic to support GPT-5 models with verbosity and reasoning parameters
`strings.json` | Adds UI strings for the "minimal" reasoning effort option and the verbosity selector
`test_config_flow.py` | Updates test cases to include GPT-5 models and new configuration parameters
`conftest.py` | Updates the test configuration to use a GPT-5 model reference
Reasoning models do not support temperature or top_p. Maybe this PR could be used to make these settings model-specific, applying only to GPT-4 models? If you use a non-default temperature or top_p with o-series or GPT-5 models, you will only get errors from the OpenAI API.
You are right, but let's keep this out of scope for this PR.
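As a hedged illustration of that out-of-scope follow-up, a helper along these lines could strip the sampling parameters that reasoning models reject; the function name and prefix tuple are hypothetical and not part of this PR.

```python
# Hypothetical follow-up, not part of this PR: drop sampling parameters that the
# OpenAI API rejects for reasoning models, so stale options cannot trigger API errors.

REASONING_MODEL_PREFIXES = ("o", "gpt-5")

def strip_unsupported_sampling(model: str, model_args: dict) -> dict:
    """Remove temperature/top_p when the selected model is a reasoning model."""
    if model.startswith(REASONING_MODEL_PREFIXES):
        model_args.pop("temperature", None)
        model_args.pop("top_p", None)
    return model_args
```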
@@ -346,14 +348,18 @@ async def _async_handle_chat_log(
         if tools:
             model_args["tools"] = tools
 
-        if model_args["model"].startswith("o"):
+        if model_args["model"].startswith(("o", "gpt-5")):
I wonder if reasoning will be included by default in future models, and whether we should consider negating the check in the future.
Most probably.
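A minimal sketch of the inverted check discussed above, assuming a purely illustrative list of non-reasoning model families; nothing here is part of the PR.

```python
# Illustrative only: if reasoning becomes the default for future models, the check
# could be negated so that only known non-reasoning families are special-cased.

NON_REASONING_PREFIXES = ("gpt-4", "gpt-3.5")  # hypothetical opt-out list

def is_reasoning_model(model: str) -> bool:
    """Assume reasoning support unless the model belongs to a known legacy family."""
    return not model.startswith(NON_REASONING_PREFIXES)
```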
Proposed change
Add GPT-5 support
Type of change
Additional information
Checklist
- Code formatted with `ruff format homeassistant tests`

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:
- Updated and included derived files by running: `python3 -m script.hassfest`.
- `requirements_all.txt` updated by running `python3 -m script.gen_requirements_all`.