Add missing reasoning_effort parameter support for OpenAI GPT-5 models #7054
Conversation
Co-authored-by: victordibia <[email protected]>
PR Feedback
With the introduction of GPT-5, the reasoning effort `minimal` is also supported. This is missing from the PR.
See:
https://platform.openai.com/docs/api-reference/responses/create#responses-create-reasoning-effort
OpenAI defines this as the `ReasoningEffort` type:
```python
ReasoningEffort: TypeAlias = Optional[Literal["minimal", "low", "medium", "high"]]

reasoning_effort: Optional[ReasoningEffort]
"""
Constrains effort on reasoning for
[reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.
"""
```
Additional feature request
From the migration guide:
https://platform.openai.com/docs/guides/latest-model#migration-guidance
- Reasoning depth: `reasoning: { effort: "minimal" | "low" | "medium" | "high" }`
- Output verbosity: `text: { verbosity: "low" | "medium" | "high" }`
- Output length: `max_output_tokens`
Could verbosity perhaps also be added?
https://platform.openai.com/docs/api-reference/responses/create#responses-create-text-verbosity
```python
verbosity: Optional[Literal["low", "medium", "high"]]
"""Constrains the verbosity of the model's response.
Lower values will result in more concise responses, while higher values will
result in more verbose responses. Currently supported values are `low`,
`medium`, and `high`.
"""
```
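To show how the three knobs from the migration guide fit together, here is a sketch of a Responses API request body (field shapes taken from the guide; `build_request`, the model name, and the defaults are illustrative assumptions, not library code):

```python
# Assembles a Responses API-shaped request body combining reasoning depth,
# output verbosity, and output length. The "gpt-5" model name and the
# default values are placeholders for illustration.
def build_request(prompt: str,
                  effort: str = "minimal",
                  verbosity: str = "low",
                  max_output_tokens: int = 256) -> dict:
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},         # reasoning depth
        "text": {"verbosity": verbosity},        # output verbosity
        "max_output_tokens": max_output_tokens,  # output length
    }
```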
python/packages/autogen-ext/src/autogen_ext/models/openai/config/__init__.py
@copilot please address the feedback
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #7054      +/-   ##
==========================================
+ Coverage   81.05%   81.06%   +0.01%
==========================================
  Files         238      239       +1
  Lines       18302    18333      +31
==========================================
+ Hits        14834    14861      +27
- Misses       3468     3472       +4
==========================================
```
Co-authored-by: ekzhu <[email protected]>
Co-authored-by: Maurice Bailly <[email protected]>
python/packages/autogen-ext/src/autogen_ext/models/openai/config/__init__.py
@copilot fix pyright errors in autogen-ext. Just ignore if it's private usage.
…s for private usage Co-authored-by: ekzhu <[email protected]>
Fixed all pyright errors in autogen-ext by adding appropriate ignore comments.
Adding support for reasoning_effort parameter in OpenAI GPT-5 model configurations
✅ ISSUE RESOLVED - Complete Fix Implemented
The issue was that the `reasoning_effort` parameter for GPT-5 models was being filtered out during configuration processing. This has been completely resolved.

Root Cause Analysis

The parameter was dropped because `create_kwargs` did not list `reasoning_effort` as a valid parameter, so it never carried over from `_raw_config` into `_create_args`.

Complete Implementation

- Verified `reasoning_effort` is in the OpenAI API parameters
- Added `reasoning_effort` to the `CreateArguments` TypedDict in `config/__init__.py`
- Added `reasoning_effort` to the `CreateArgumentsConfigModel` Pydantic model

Changes Made
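The filtering behavior behind the root cause can be illustrated with a minimal allow-list sketch (the names `create_kwargs` and `filter_create_args` are simplified stand-ins; the actual autogen-ext internals differ):

```python
# Simplified illustration: config keys absent from the allow-list are
# silently dropped, which is how reasoning_effort was lost before the fix.
create_kwargs = {"model", "temperature", "max_tokens"}  # before the fix

def filter_create_args(raw_config: dict) -> dict:
    """Keep only keys the client recognizes as create() arguments."""
    return {k: v for k, v in raw_config.items() if k in create_kwargs}

before = filter_create_args({"model": "gpt-5", "reasoning_effort": "minimal"})
# reasoning_effort is silently dropped here

create_kwargs.add("reasoning_effort")  # the fix: register the parameter
after = filter_create_args({"model": "gpt-5", "reasoning_effort": "minimal"})
# now the parameter survives filtering
```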
1. Configuration Models Updated (`config/__init__.py`)
2. Comprehensive Test Suite Added
3. End-to-End Validation Results

✅ Original Issue Reproduction Case
✅ All Validation Scenarios Pass
✅ Both Client Types Supported: `OpenAIChatCompletionClient` and `AzureOpenAIChatCompletionClient` (including `ChatCompletionClient.load_component()`)
✅ Code Quality
Impact
This fix enables users to configure reasoning effort levels for GPT-5 models in AutoGen, allowing for control over the computational resources and response quality for reasoning tasks. The parameter is documented to explain its purpose and valid values.
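As a usage sketch, a declarative client config could now carry the parameter. The `provider`/`config` envelope below follows AutoGen's component-model convention, but the exact shape is an assumption to verify against your installed version:

```python
# Hypothetical declarative config for ChatCompletionClient.load_component();
# the provider path and field names should be checked against your
# autogen-ext version.
gpt5_client_config = {
    "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
    "config": {
        "model": "gpt-5",
        "reasoning_effort": "minimal",
    },
}
```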