Question

I'm trying out tracing by following the example in the MLflow OpenAI Agent integration guide, using a local Ollama model. After following the suggestion in this issue, the previous error is gone. However, I now encounter this new warning and error:
2025/05/12 11:39:52 WARNING mlflow.openai._openai_autolog: Encountered unexpected error when ending trace: 'NotGiven' object is not iterable
Traceback (most recent call last):
  File ".../mlflow/openai/_openai_autolog.py", line 352, in _end_span_on_success
    set_span_chat_attributes(span, inputs, result)
  File ".../mlflow/openai/utils/chat_schema.py", line 46, in set_span_chat_attributes
    if tools := _parse_tools(inputs):
  File ".../mlflow/openai/utils/chat_schema.py", line 261, in _parse_tools
    for tool in tools:
TypeError: 'NotGiven' object is not iterable
My setup:
import asyncio

import mlflow
from openai import AsyncOpenAI

from agents import (
    Agent,
    OpenAIChatCompletionsModel,
    Runner,
    set_default_openai_client,
    set_trace_processors,
    set_tracing_disabled,
)
from agents.model_settings import ModelSettings

external_client = AsyncOpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # required, but unused
)

# Enable auto tracing for OpenAI Agents SDK
mlflow.openai.autolog()

# Optional: set a tracking URI and an experiment
mlflow.set_tracking_uri("http://localhost:8080")
mlflow.set_experiment("OpenAI Agent")

set_trace_processors([])

# Define a simple multi-agent workflow
vietnamese_agent = Agent(
    name="Vietnamese agent",
    instructions="You only speak Vietnamese.",
    model=OpenAIChatCompletionsModel(
        model="qwen3:0.6b",
        openai_client=external_client,
    ),
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English",
    model=OpenAIChatCompletionsModel(
        model="qwen3:0.6b",
        openai_client=external_client,
    ),
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[vietnamese_agent, english_agent],
    model=OpenAIChatCompletionsModel(
        model="qwen3:0.6b",
        openai_client=external_client,
    ),
)


async def main():
    result = await Runner.run(triage_agent, input="Xin chào, bạn có khỏe không?")
    print(result.final_output)


# If you are running this code in a Jupyter notebook, replace this with `await main()`.
if __name__ == "__main__":
    asyncio.run(main())
Output
$ python tracing_example.py
INFO:httpx:HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
2025/05/12 11:39:52 WARNING mlflow.openai._openai_autolog: Encountered unexpected error when ending trace: 'NotGiven' object is not iterable
Traceback (most recent call last):
  File "/home/namdv/anaconda3/envs/chatbot/lib/python3.10/site-packages/mlflow/openai/_openai_autolog.py", line 352, in _end_span_on_success
    set_span_chat_attributes(span, inputs, result)
  File "/home/namdv/anaconda3/envs/chatbot/lib/python3.10/site-packages/mlflow/openai/utils/chat_schema.py", line 46, in set_span_chat_attributes
    if tools := _parse_tools(inputs):
  File "/home/namdv/anaconda3/envs/chatbot/lib/python3.10/site-packages/mlflow/openai/utils/chat_schema.py", line 261, in _parse_tools
    for tool in tools:
TypeError: 'NotGiven' object is not iterable
<think>
Okay, the user sent a greeting in Vietnamese. I need to respond in Vietnamese. Let me check the previous messages again. The user asked, "Xin chào, bạn có khỏe không?" which translates to "Hello, are you healthy?" in Vietnamese. My role is to keep Vietnamese. So, I should respond in Vietnamese as well. Make sure the answer is polite and friendly. Maybe say something like "Hello! Are you healthy?" and offer a response. Keep it simple.
</think>
Xin chào! Bạn có khỏe không? 😊
Questions:
1. How can I avoid the 'NotGiven' object is not iterable error when using mlflow.openai.autolog() with an Ollama backend?
2. Is there a better way to handle tracing (or disable it cleanly) when using non-OpenAI models?
3. Is there any documentation or example code on implementing and registering custom trace processors?
Thanks in advance!
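On question 3, for anyone landing here: as I read the Agents SDK docs, set_trace_processors replaces the default exporter with whatever objects you pass, and each object implements the TracingProcessor interface. Below is my understanding of that interface as a no-op sketch; the method names are taken from the SDK docs, so please verify them against your installed openai-agents version (the class deliberately doesn't import the SDK, so the shape is easy to see on its own).

```python
class LoggingTraceProcessor:
    """Sketch of the Agents SDK TracingProcessor interface (my reading of
    the docs; verify the method names against your installed version)."""

    def on_trace_start(self, trace):
        print(f"trace started: {trace}")

    def on_trace_end(self, trace):
        print(f"trace ended: {trace}")

    def on_span_start(self, span):
        print(f"span started: {span}")

    def on_span_end(self, span):
        print(f"span ended: {span}")

    def shutdown(self):
        pass  # release any resources held by the processor

    def force_flush(self):
        pass  # flush any buffered traces immediately


# Registration would then look like:
#   from agents import set_trace_processors
#   set_trace_processors([LoggingTraceProcessor()])
# Passing an empty list (as in my setup above) simply drops all SDK traces.
```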