How can I use the azure openai api? #44

Closed
huangbhan opened this issue Mar 12, 2025 · 5 comments

@huangbhan

No description provided.

@rm-openai (Collaborator)

There are some ways described in https://openai.github.io/openai-agents-python/models/#using-other-llm-providers

The easiest way is probably this:

from openai import AsyncAzureOpenAI
from agents import set_default_openai_client, set_tracing_disabled

set_default_openai_client(AsyncAzureOpenAI(...))

# Disable tracing, since traces are exported to the OpenAI platform
# and an Azure key cannot authenticate there
set_tracing_disabled(True)
# or keep tracing and give the exporter an OpenAI key:
# set_tracing_export_api_key(OPENAI_API_KEY)
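If you'd rather not hardcode the constructor arguments, `AsyncAzureOpenAI` also picks up its configuration from environment variables. A minimal sketch; the resource name and key below are placeholders:

```shell
# Read automatically by AsyncAzureOpenAI when the corresponding
# constructor arguments are omitted.
export AZURE_OPENAI_API_KEY="<your-azure-key>"
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com"
export OPENAI_API_VERSION="2024-06-01"
```

With these set, `AsyncAzureOpenAI()` can be constructed with no arguments.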

@rm-openai (Collaborator)

Feel free to reopen if needed.

@panjianning

This works for me:

# For running in a Jupyter notebook:
# import nest_asyncio
# nest_asyncio.apply()

from openai import AsyncAzureOpenAI
from agents import Agent, OpenAIChatCompletionsModel, Runner
from agents import set_default_openai_client, set_tracing_disabled
from agents.model_settings import ModelSettings

openai_client = AsyncAzureOpenAI(
    api_key=OPENAI_API_KEY,  # your Azure OpenAI key
    api_version="2024-06-01",
    azure_endpoint="https://xxxx.openai.azure.com",
    azure_deployment="gpt-4o-mini",
)

# Optional sanity check that the client itself works:
# chat_completion = await openai_client.chat.completions.create(
#     messages=[{"role": "user", "content": "Hello"}],
#     model="gpt-4o-mini",
# )
# print(chat_completion)

set_default_openai_client(openai_client)
set_tracing_disabled(True)

agent = Agent(
    name="Chinese agent",
    instructions="You only speak Chinese.",
    model=OpenAIChatCompletionsModel(
        model="gpt-4o-mini",
        openai_client=openai_client,
    ),
    model_settings=ModelSettings(temperature=0.5),
)

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

@LeonG7 commented Mar 12, 2025

@rm-openai

I got an error when I used this method to call other LLM providers; not only qwen, but deepseek and other services hit the same error:

external_client = AsyncOpenAI(
    api_key="***********************",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

agent = Agent(
    name="Joker",
    instructions="You are a helpful assistant.",
    model=OpenAIChatCompletionsModel(
        model="qwen-max",
        openai_client=external_client,
    ),
    model_settings=ModelSettings(temperature=0.6),
)

---------------------------------------------------------------------------
OpenAIError                               Traceback (most recent call last)
Cell In[5], line 24
     20             print(event.data.delta, end="", flush=True)
     23 if __name__ == "__main__":
---> 24     asyncio.run(main())

File ~/Documents/software/anaconda3/envs/agents/lib/python3.11/site-packages/nest_asyncio.py:30, in _patch_asyncio.<locals>.run(main, debug)
     28 task = asyncio.ensure_future(main)
     29 try:
---> 30     return loop.run_until_complete(task)
     31 finally:
     32     if not task.done():

File ~/Documents/software/anaconda3/envs/agents/lib/python3.11/site-packages/nest_asyncio.py:98, in _patch_loop.<locals>.run_until_complete(self, future)
     95 if not f.done():
     96     raise RuntimeError(
     97         'Event loop stopped before Future completed.')
---> 98 return f.result()

File ~/Documents/software/anaconda3/envs/agents/lib/python3.11/asyncio/futures.py:203, in Future.result(self)
    201 self.__log_traceback = False
    202 if self._exception is not None:
--> 203     raise self._exception.with_traceback(self._exception_tb)
    204 return self._result

File ~/Documents/software/anaconda3/envs/agents/lib/python3.11/asyncio/tasks.py:277, in Task.__step(***failed resolving arguments***)
    273 try:
    274     if exc is None:
    275         # We use the `send` method directly, because coroutines
    276         # don't have `__iter__` and `__next__` methods.
--> 277         result = coro.send(None)
    278     else:
    279         result = coro.throw(exc)

Cell In[5], line 16
      5 async def main():
      6     agent = Agent(
      7         name="Joker",
      8         instructions="You are a helpful assistant.",
   (...)
     13     model_settings=ModelSettings(temperature=0.6),
     14     )
---> 16     result = Runner.run_streamed(agent, input="hello")
     17     async for event in result.stream_events():
     18         print(event.type)

File ~/Documents/software/anaconda3/envs/agents/lib/python3.11/site-packages/agents/run.py:371, in Runner.run_streamed(cls, starting_agent, input, context, max_turns, hooks, run_config)
    369     hooks = RunHooks[Any]()
    370 if run_config is None:
--> 371     run_config = RunConfig()
    373 # If there's already a trace, we don't create a new one. In addition, we can't end the
    374 # trace here, because the actual work is done in `stream_events` and this method ends
    375 # before that.
    376 new_trace = (
    377     None
    378     if get_current_trace()
   (...)
    385     )
    386 )

File <string>:4, in __init__(self, model, model_provider, model_settings, handoff_input_filter, input_guardrails, output_guardrails, tracing_disabled, trace_include_sensitive_data, workflow_name, trace_id, group_id, trace_metadata)

File ~/Documents/software/anaconda3/envs/agents/lib/python3.11/site-packages/agents/models/openai_provider.py:43, in OpenAIProvider.__init__(self, api_key, base_url, openai_client, organization, project, use_responses)
     41     self._client = openai_client
     42 else:
---> 43     self._client = _openai_shared.get_default_openai_client() or AsyncOpenAI(
     44         api_key=api_key or _openai_shared.get_default_openai_key(),
     45         base_url=base_url,
     46         organization=organization,
     47         project=project,
     48         http_client=shared_http_client(),
     49     )
     51 self._is_openai_model = self._client.base_url.host.startswith("api.openai.com")
     52 if use_responses is not None:

File ~/Documents/software/anaconda3/envs/agents/lib/python3.11/site-packages/openai/_client.py:345, in AsyncOpenAI.__init__(self, api_key, organization, project, base_url, websocket_base_url, timeout, max_retries, default_headers, default_query, http_client, _strict_response_validation)
    343     api_key = os.environ.get("OPENAI_API_KEY")
    344 if api_key is None:
--> 345     raise OpenAIError(
    346         "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"
    347     )
    348 self.api_key = api_key
    350 if organization is None:

OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

@huangbhan (Author)

@LeonG7 I have run into this problem. When you use external_client = AsyncOpenAI(), you must export the environment variable in your terminal: export OPENAI_API_KEY="". OPENAI_API_KEY can be empty, or set to anything at all, but it must be set.
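In shell form, before launching your script (the value is a placeholder; the SDK only checks that the variable exists, since a custom base_url provider handles real authentication):

```shell
# A dummy value satisfies the AsyncOpenAI constructor check; it is
# never sent to the third-party endpoint configured via base_url.
export OPENAI_API_KEY="dummy"
```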
