diff --git a/docs/ja/quickstart.md b/docs/ja/quickstart.md
index 1a4913c7..d45b7aa4 100644
--- a/docs/ja/quickstart.md
+++ b/docs/ja/quickstart.md
@@ -186,4 +186,4 @@ if __name__ == "__main__":
 
 - [エージェント](agents.md) の設定方法について学ぶ
 - [エージェントの実行](running_agents.md) について学ぶ
-- [tools](tools.md)、[guardrails](guardrails.md)、[models](models.md) について学ぶ
\ No newline at end of file
+- [tools](tools.md)、[guardrails](guardrails.md)、[models](models/index.md) について学ぶ
diff --git a/docs/models.md b/docs/models/index.md
similarity index 95%
rename from docs/models.md
rename to docs/models/index.md
index 1ce6e8f2..4cf4f643 100644
--- a/docs/models.md
+++ b/docs/models/index.md
@@ -70,9 +70,9 @@ You can use other LLM providers in 3 ways (examples [here](https://github.com/op
 
 1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
 2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
-3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py).
+3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py). An easy way to use most available models is via the [LiteLLM integration](./litellm.md).
 
-In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](tracing.md).
+In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](../tracing.md).
 
 !!! note
 
@@ -86,7 +86,7 @@ If you get errors related to tracing, this is because traces are uploaded to Ope
 
 1. Disable tracing entirely: [`set_tracing_disabled(True)`][agents.set_tracing_disabled].
 2. Set an OpenAI key for tracing: [`set_tracing_export_api_key(...)`][agents.set_tracing_export_api_key]. This API key will only be used for uploading traces, and must be from [platform.openai.com](https://platform.openai.com/).
-3. Use a non-OpenAI trace processor. See the [tracing docs](tracing.md#custom-tracing-processors).
+3. Use a non-OpenAI trace processor. See the [tracing docs](../tracing.md#custom-tracing-processors).
 
 ### Responses API support
 
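As a reader's aid for option 1 and the tracing guidance in the hunk above, a minimal sketch might look like the following. The endpoint URL and key are illustrative placeholders, not part of this diff; `set_default_openai_api` is the SDK's existing switch between the Responses and Chat Completions APIs.

```python
# Sketch of option 1: point the whole SDK at an OpenAI-compatible endpoint.
from openai import AsyncOpenAI

from agents import set_default_openai_api, set_default_openai_client, set_tracing_disabled

client = AsyncOpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical endpoint
    api_key="PROVIDER_API_KEY",                      # placeholder key
)
set_default_openai_client(client)

# Many OpenAI-compatible providers implement Chat Completions rather than
# the Responses API, so select it explicitly.
set_default_openai_api("chat_completions")

# No platform.openai.com key in this setup, so disable trace upload,
# as the docs above recommend.
set_tracing_disabled(True)
```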
diff --git a/docs/models/litellm.md b/docs/models/litellm.md
new file mode 100644
index 00000000..90572a28
--- /dev/null
+++ b/docs/models/litellm.md
@@ -0,0 +1,73 @@
+# Using any model via LiteLLM
+
+!!! note
+
+    The LiteLLM integration is in beta. You may run into issues with some model providers, especially smaller ones. Please report any issues via [GitHub issues](https://github.com/openai/openai-agents-python/issues) and we'll fix them quickly.
+
+[LiteLLM](https://docs.litellm.ai/docs/) is a library that allows you to use 100+ models via a single interface. We've added a LiteLLM integration to allow you to use any AI model in the Agents SDK.
+
+## Setup
+
+You'll need to ensure `litellm` is available. You can do this by installing the optional `litellm` dependency group:
+
+```bash
+pip install "openai-agents[litellm]"
+```
+
+Once done, you can use [`LitellmModel`][agents.extensions.models.litellm_model.LitellmModel] in any agent.
+
+## Example
+
+This is a fully working example. When you run it, you'll be prompted for a model name and API key. For example, you could enter:
+
+- `openai/gpt-4.1` for the model, and your OpenAI API key
+- `anthropic/claude-3-5-sonnet-20240620` for the model, and your Anthropic API key
+- etc.
+
+For a full list of models supported in LiteLLM, see the [litellm providers docs](https://docs.litellm.ai/docs/providers).
+
+```python
+from __future__ import annotations
+
+import asyncio
+
+from agents import Agent, Runner, function_tool
+from agents.extensions.models.litellm_model import LitellmModel
+
+@function_tool
+def get_weather(city: str):
+    print(f"[debug] getting weather for {city}")
+    return f"The weather in {city} is sunny."
+
+
+async def main(model: str, api_key: str):
+    agent = Agent(
+        name="Assistant",
+        instructions="You only respond in haikus.",
+        model=LitellmModel(model=model, api_key=api_key),
+        tools=[get_weather],
+    )
+
+    result = await Runner.run(agent, "What's the weather in Tokyo?")
+    print(result.final_output)
+
+
+if __name__ == "__main__":
+    # First try to get the model name and API key from CLI args.
+    import argparse
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--model", type=str, required=False)
+    parser.add_argument("--api-key", type=str, required=False)
+    args = parser.parse_args()
+
+    model = args.model
+    if not model:
+        model = input("Enter a model name for Litellm: ")
+
+    api_key = args.api_key
+    if not api_key:
+        api_key = input("Enter an API key for Litellm: ")
+
+    asyncio.run(main(model, api_key))
+```
diff --git a/docs/quickstart.md b/docs/quickstart.md
index 9513fdb8..213d16e5 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -186,4 +186,4 @@ Learn how to build more complex agentic flows:
 
 - Learn about how to configure [Agents](agents.md).
 - Learn about [running agents](running_agents.md).
-- Learn about [tools](tools.md), [guardrails](guardrails.md) and [models](models.md).
+- Learn about [tools](tools.md), [guardrails](guardrails.md) and [models](models/index.md).
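The new litellm.md page above demonstrates option 3 (a per-agent `LitellmModel`). For completeness, option 2 from docs/models/index.md, a run-level model provider, could be sketched as below; the `LitellmProvider` class and the fallback model id are illustrative assumptions, not part of this diff.

```python
from __future__ import annotations

import asyncio

from agents import Agent, RunConfig, Runner
from agents.extensions.models.litellm_model import LitellmModel
from agents.models.interface import Model, ModelProvider


class LitellmProvider(ModelProvider):
    """Serves every agent in one run from LiteLLM (hypothetical helper)."""

    def get_model(self, model_name: str | None) -> Model:
        # Fall back to an arbitrary default when an agent sets no model.
        # The provider's API key is expected in the environment,
        # e.g. ANTHROPIC_API_KEY.
        return LitellmModel(model=model_name or "anthropic/claude-3-5-sonnet-20240620")


async def main() -> None:
    agent = Agent(name="Assistant", instructions="Reply briefly.")
    result = await Runner.run(
        agent,
        "What's the capital of France?",
        run_config=RunConfig(model_provider=LitellmProvider()),
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```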
diff --git a/docs/ref/extensions/litellm.md b/docs/ref/extensions/litellm.md
new file mode 100644
index 00000000..7bd67fde
--- /dev/null
+++ b/docs/ref/extensions/litellm.md
@@ -0,0 +1,3 @@
+# `LiteLLM Models`
+
+::: agents.extensions.models.litellm_model
diff --git a/mkdocs.yml b/mkdocs.yml
index e7f43993..286359dc 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -66,7 +66,9 @@ plugins:
             - context.md
             - guardrails.md
             - multi_agent.md
-            - models.md
+            - Models:
+                  - models/index.md
+                  - models/litellm.md
             - config.md
             - visualization.md
             - Voice agents:
@@ -123,6 +125,7 @@ plugins:
             - Extensions:
                 - ref/extensions/handoff_filters.md
                 - ref/extensions/handoff_prompt.md
+                - ref/extensions/litellm.md
 
     - locale: ja
       name: 日本語
diff --git a/src/agents/extensions/models/litellm_model.py b/src/agents/extensions/models/litellm_model.py
index 915a1ccb..0fc277c3 100644
--- a/src/agents/extensions/models/litellm_model.py
+++ b/src/agents/extensions/models/litellm_model.py
@@ -47,6 +47,11 @@
 class LitellmModel(Model):
+    """This class enables using any model via LiteLLM. LiteLLM allows you to access OpenAI,
+    Anthropic, Gemini, Mistral, and many other models.
+    See supported models here: [litellm models](https://docs.litellm.ai/docs/providers).
+    """
+
     def __init__(
         self,
         model: str,
         base_url: str | None = None,
         api_key: str | None = None,
@@ -140,9 +145,6 @@ async def stream_response(
         *,
         previous_response_id: str | None,
     ) -> AsyncIterator[TResponseStreamEvent]:
-        """
-        Yields a partial message as it is generated, as well as the usage information.
-        """
         with generation_span(
             model=str(self.model),
             model_config=dataclasses.asdict(model_settings)
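Finally, the docstring added to `LitellmModel` above pairs naturally with option 3 (per-agent models). A short sketch of mixing the default OpenAI provider with a LiteLLM-backed agent; the model ids and key value are illustrative placeholders.

```python
from agents import Agent
from agents.extensions.models.litellm_model import LitellmModel

# One agent on the default OpenAI provider...
planner = Agent(
    name="Planner",
    instructions="Break the task into steps.",
    model="gpt-4.1",  # resolved by the default OpenAI provider
)

# ...and another on Anthropic via LiteLLM, side by side in the same app.
writer = Agent(
    name="Writer",
    instructions="You only respond in haikus.",
    model=LitellmModel(
        model="anthropic/claude-3-5-sonnet-20240620",
        api_key="YOUR_ANTHROPIC_KEY",  # placeholder; env vars also work
    ),
)
```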