Custom model provider ignored when using agents as tools #663

Open
rolshoven opened this issue May 7, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@rolshoven

rolshoven commented May 7, 2025

Describe the question

When a custom model provider is passed via the RunConfig in Runner.run, it is ignored whenever other agents are used as tools within a triage agent. The reason is that Agent.as_tool creates its own call to Runner.run, which does not forward the current run config. A fix would be to add an optional run_config argument to Agent.as_tool. Alternatively, if we really just care about the model provider, Agent.as_tool could accept a model_provider parameter instead of the proposed run_config argument.

As a result, the current implementation ignores the model provider passed to the initial Runner.run and falls back to the default OpenAIResponsesModel.
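The proposed fix can be sketched with stub classes (these are simplified stand-ins, not the real SDK types): as_tool captures an optional run_config in a closure and forwards it to the nested Runner.run, so the nested call no longer falls back to the default provider.

```python
class RunConfig:
    """Stub of the SDK's RunConfig, holding only a model provider."""

    def __init__(self, model_provider=None):
        self.model_provider = model_provider


class Runner:
    @staticmethod
    def run(agent, msg, run_config=None):
        # Without a run_config, the nested run resolves the default provider.
        provider = run_config.model_provider if run_config else "default-openai"
        return f"{agent.name} via {provider}"


class Agent:
    def __init__(self, name):
        self.name = name

    def as_tool(self, run_config=None):
        # Proposed change: capture the caller's run_config so the nested
        # Runner.run call inherits the custom model provider.
        def tool(msg):
            return Runner.run(self, msg, run_config=run_config)
        return tool


translator = Agent("spanish_agent")
cfg = RunConfig(model_provider="openrouter")

current = translator.as_tool()                # today: run_config is dropped
proposed = translator.as_tool(run_config=cfg)

print(current("hola"))   # spanish_agent via default-openai
print(proposed("hola"))  # spanish_agent via openrouter
```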

Debug information

  • Agents SDK version: 0.0.13
  • Python version 3.11.11

Repro steps

Use the following script together with an OpenRouter API key to reproduce (or change the provider to an OllamaProvider so that you don't need an OpenRouter API key):

import asyncio
import os
from agents import (
    Agent,
    ItemHelpers,
    MessageOutputItem,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    set_tracing_disabled,
)
from openai import AsyncOpenAI


client = AsyncOpenAI(base_url="https://openrouter.ai/api/v1", api_key=os.getenv("OPENROUTER_API_KEY"))
set_tracing_disabled(disabled=True)


class OpenRouterModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel("meta-llama/llama-3.3-70b-instruct", openai_client=client)


spanish_agent = Agent(
    name="spanish_agent",
    instructions="You translate the user's message to Spanish",
    handoff_description="An english to spanish translator",
)


orchestrator_agent = Agent(
    name="orchestrator_agent",
    instructions=(
        "You are a translation agent. You use the tools given to you to translate."
        "If asked for multiple translations, you call the relevant tools in order."
        "You never translate on your own, you always use the provided tools."
    ),
    tools=[
        # Agent.as_tool spawns its own Runner.run internally, which does not
        # receive the run_config passed in main() below, so the nested call
        # falls back to the default OpenAI provider.
        spanish_agent.as_tool(
            tool_name="translate_to_spanish",
            tool_description="Translate the user's message to Spanish",
        ),
    ],
)

synthesizer_agent = Agent(
    name="synthesizer_agent",
    instructions="You inspect translations, correct them if needed, and produce a final concatenated response.",
)


async def main():
    msg = input("Hi! What would you like translated, and to which languages? ")

    orchestrator_result = await Runner.run(
        orchestrator_agent, msg, run_config=RunConfig(model_provider=OpenRouterModelProvider())
    )

    for item in orchestrator_result.new_items:
        if isinstance(item, MessageOutputItem):
            text = ItemHelpers.text_message_output(item)
            if text:
                print(f"  - Translation step: {text}")

    synthesizer_result = await Runner.run(synthesizer_agent, orchestrator_result.to_input_list())

    print(f"\n\nFinal response:\n{synthesizer_result.final_output}")


if __name__ == "__main__":
    asyncio.run(main())

Running this results in the following exception if no OpenAI API key is set:

openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

Expected behavior

I would have expected Agent.as_tool to reuse the run config of the run it is invoked from. Failing that, I would have expected to be able to pass a run config to as_tool so that the custom model provider is still used.

@rolshoven rolshoven added the bug Something isn't working label May 7, 2025
@Ddper
Contributor

Ddper commented May 9, 2025

I ran into the same issue and worked around it by setting the model explicitly on each agent.

import asyncio
import os
from agents import (
    Agent,
    ItemHelpers,
    MessageOutputItem,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    set_tracing_disabled,
    set_default_openai_client,
)
from openai import AsyncOpenAI


client = AsyncOpenAI(base_url="https://api.deepseek.com", api_key="****")
set_default_openai_client(client, use_for_tracing=False)
set_tracing_disabled(disabled=True)


class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel("deepseek-chat", openai_client=client)


spanish_agent = Agent(
    name="spanish_agent",
    instructions="You translate the user's message to Spanish",
    model=OpenAIChatCompletionsModel(model="deepseek-chat", openai_client=client),
    handoff_description="An english to spanish translator",
)


orchestrator_agent = Agent(
    name="orchestrator_agent",
    instructions=(
        "You are a translation agent. You use the tools given to you to translate."
        "If asked for multiple translations, you call the relevant tools in order."
        "You never translate on your own, you always use the provided tools."
    ),
    tools=[
        spanish_agent.as_tool(
            tool_name="translate_to_spanish",
            tool_description="Translate the user's message to Spanish",
        ),
    ],
)

synthesizer_agent = Agent(
    name="synthesizer_agent",
    instructions="You inspect translations, correct them if needed, and produce a final concatenated response.",
)


async def main():
    msg = input("Hi! What would you like translated, and to which languages? ")

    orchestrator_result = await Runner.run(
        orchestrator_agent, msg, run_config=RunConfig(model_provider=CustomModelProvider())
    )

    for item in orchestrator_result.new_items:
        if isinstance(item, MessageOutputItem):
            text = ItemHelpers.text_message_output(item)
            if text:
                print(f"  - Translation step: {text}")

    synthesizer_result = await Runner.run(synthesizer_agent, orchestrator_result.to_input_list())

    print(f"\n\nFinal response:\n{synthesizer_result.final_output}")


if __name__ == "__main__":
    asyncio.run(main())

But DeepSeek does not support the Responses API, so I get the error below:

Error getting response: Error code: 404 - {'error_msg': 'Not Found. Please check the configuration.'}. (request_id: None)
Traceback (most recent call last):
  File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1570, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/will/workspace/openai-agents-python/examples/agent_patterns/agents_as_tools_custom_provider.py", line 76, in <module>
    asyncio.run(main())
  File "/Users/will/.pyenv/versions/3.12.9/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/Users/will/.pyenv/versions/3.12.9/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/.pyenv/versions/3.12.9/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/examples/agent_patterns/agents_as_tools_custom_provider.py", line 70, in main
    synthesizer_result = await Runner.run(synthesizer_agent, orchestrator_result.to_input_list())
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/src/agents/run.py", line 218, in run
    input_guardrail_results, turn_result = await asyncio.gather(
                                           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/src/agents/run.py", line 760, in _run_single_turn
    new_response = await cls._get_new_response(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/src/agents/run.py", line 919, in _get_new_response
    new_response = await model.get_response(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/src/agents/models/openai_responses.py", line 76, in get_response
    response = await self._fetch_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/src/agents/models/openai_responses.py", line 242, in _fetch_response
    return await self._client.responses.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/.venv/lib/python3.12/site-packages/openai/resources/responses/responses.py", line 1529, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1742, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/will/workspace/openai-agents-python/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1549, in request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error_msg': 'Not Found. Please check the configuration.'}
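For backends that only implement the Chat Completions API, the Agents SDK also exposes a global switch: recent versions provide set_default_openai_api (verify it exists in your installed version), which routes default model resolution through Chat Completions instead of the Responses endpoint. A minimal sketch of that setup, assuming the same DeepSeek client as above:

```python
from openai import AsyncOpenAI
from agents import set_default_openai_api, set_default_openai_client

client = AsyncOpenAI(base_url="https://api.deepseek.com", api_key="****")
set_default_openai_client(client, use_for_tracing=False)

# Resolve default models through the Chat Completions API instead of the
# Responses API, which DeepSeek does not implement.
set_default_openai_api("chat_completions")
```

This avoids having to set model= on every individual agent, though it changes the default for the whole process.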
