ollama APIConnectionError #120
Comments
I tried #44 and it didn't work out of the box. I had to combine what was in that thread with https://openai.github.io/openai-agents-python/models/, but it does work with Ollama. Here's an example with the starter code:
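Roughly, the working setup looks like this (a sketch assuming Ollama is serving llama3.1 on the default port; Ollama ignores the api_key, but the client wants a non-empty value):

# Point the OpenAI client at Ollama's OpenAI-compatible endpoint.
from openai import AsyncOpenAI
from agents import Agent, Runner, OpenAIChatCompletionsModel, set_tracing_disabled

client = AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
set_tracing_disabled(True)  # no OpenAI key, so skip trace export

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model=OpenAIChatCompletionsModel(model="llama3.1", openai_client=client),
)

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)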
Another issue I ran into: models that do NOT support tool calling will NOT work with the handoff example. Any Ollama model that supports tool calling (https://ollama.com/search?c=tools) will work. Here's a working example of the agent triage handoff with qwen2.5:
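A sketch of the triage handoff, assuming qwen2.5 has been pulled locally (it's on the tool-calling list):

from openai import AsyncOpenAI
from agents import Agent, Runner, OpenAIChatCompletionsModel, set_tracing_disabled

client = AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
set_tracing_disabled(True)

# Handoffs are implemented as tool calls, so the model must support tools.
model = OpenAIChatCompletionsModel(model="qwen2.5", openai_client=client)

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
    model=model,
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
    model=model,
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
    model=model,
)

result = Runner.run_sync(triage_agent, "Hola, ¿cómo estás?")
print(result.final_output)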
Error when using ollama:
from agents import Agent, Runner, ModelSettings, OpenAIChatCompletionsModel, set_default_openai_client
from openai import AsyncOpenAI

external_client = AsyncOpenAI(
    api_key="",
    base_url="http://localhost:11434/v1",
)
set_default_openai_client(external_client)

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak English.",
    model=OpenAIChatCompletionsModel(
        model="llama3.1",
        openai_client=external_client,
    ),
    model_settings=ModelSettings(temperature=0.5),
)

res = Runner.run_sync(
    starting_agent=spanish_agent,
    input="hi",
)
print(res.final_output)
error:
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
OPENAI_API_KEY is not set, skipping trace export