
[Docs Bug] RunConfig.model does not override Agent.model as stated in documentation #1007

Closed
@Zoha-Khan123

Description

Describe the bug
According to the Agents SDK documentation, the model field in RunConfig is supposed to apply a global LLM model "irrespective of what model each Agent has." In practice, however, when an Agent specifies its own model, that model takes precedence over the one provided in RunConfig.

To Reproduce
Steps to reproduce the behavior:

```python
import asyncio

from openai import AsyncOpenAI
from agents import Agent, Runner, OpenAIChatCompletionsModel
from agents.run import RunConfig

external_client = AsyncOpenAI(
    api_key="FAKE_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/",
)

# Model intended to apply globally via RunConfig
model = OpenAIChatCompletionsModel(
    model="gemini-1.5-flash",
    openai_client=external_client,
)

config = RunConfig(
    model=model,
    model_provider=external_client,
)

# Agent with its own model set
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant",
    model="gemini-2.0-flash",
)

async def test():
    result = await Runner.run(agent, "Hello", run_config=config)
    print(result)

asyncio.run(test())
```

Expected model: gemini-1.5-flash (from RunConfig)
Actual model: gemini-2.0-flash (from Agent)
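
One way to confirm which model a run actually uses is to wrap the client call and log the model name being sent. This is a minimal diagnostic sketch, assuming OpenAIChatCompletionsModel ultimately sends requests through external_client.chat.completions.create; logging_create is a helper introduced here for illustration only.

```python
# Diagnostic sketch (assumption: OpenAIChatCompletionsModel routes requests
# through external_client.chat.completions.create). Wrap the call and log the
# `model` argument that is actually sent to the API.
original_create = external_client.chat.completions.create

async def logging_create(*args, **kwargs):
    # Print the model name the runner resolved for this request.
    print("model sent to API:", kwargs.get("model"))
    return await original_create(*args, **kwargs)

external_client.chat.completions.create = logging_create
```

With the wrapper in place, re-running the repro prints whichever model name the runner resolved for the request.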

Expected behavior
The documentation states that RunConfig.model should be applied globally, overriding per-agent settings. In reality, Agent.model takes precedence.

Suggested fix
Either:

1. Update the documentation to state that Agent.model overrides RunConfig.model, or

2. Change the SDK behavior so that RunConfig.model is a true global override (a sketch of this precedence follows below).
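
If the second option is taken, the resolution logic might look roughly like this. It is a hypothetical sketch under the assumptions stated in the comments, not the SDK's actual internals; resolve_model, Model, and the provider parameter are illustrative stand-ins.

```python
# Hypothetical sketch of the requested precedence: RunConfig.model, when set,
# always wins over Agent.model. Names here (resolve_model, Model, provider)
# are illustrative, not the SDK's real internals.
from typing import Union


class Model:
    """Stand-in for the SDK's model interface."""


def resolve_model(
    agent_model: Union[str, "Model", None],
    run_config_model: Union[str, "Model", None],
    provider,
) -> "Model":
    # 1. A model set on RunConfig applies globally, irrespective of the Agent.
    if isinstance(run_config_model, Model):
        return run_config_model
    if isinstance(run_config_model, str):
        return provider.get_model(run_config_model)
    # 2. Otherwise fall back to the Agent's own model.
    if isinstance(agent_model, Model):
        return agent_model
    return provider.get_model(agent_model)
```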
