Description
The OpenAI API was complaining about excessive tokens being requested, even though I specified :max-tokens 500 in my block. So I checked the code and found this:
Lines 720 to 723 in cc4a4eb:

;; o1 does not support max-tokens
(when max-tokens
  (setq max-tokens nil)
  (setq max-completion-tokens (or max-tokens 128000))))
Is this AI-generated code? It doesn't look like a mistake a human would make. If max-tokens is non-nil, the code sets it to nil and then tries to use its value, so it always falls back to the hard-coded default of 128000, which is what makes the API complain about excessive tokens.
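To illustrate, here is a minimal repro of that ordering problem, using the same variable names (evaluated in a scratch buffer):

;; Minimal repro: after the first setq, max-tokens is nil, so the `or'
;; always falls through to the hard-coded 128000.
(let ((max-tokens 500)
      (max-completion-tokens nil))
  (when max-tokens
    (setq max-tokens nil)
    (setq max-completion-tokens (or max-tokens 128000)))
  max-completion-tokens)
;; => 128000, no matter what :max-tokens was set to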
Is this what you meant?
(when max-tokens
  (setq max-completion-tokens (or max-tokens 128000))
  (setq max-tokens nil))
After changing that, I ran into another issue when using the o4-mini model, which uses max_completion_tokens. Passing even a nil value for max_tokens causes the following error:

Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
It looks like the culprit is in the model check: it checks for an "o1" or "o3" prefix, but not an "o4" prefix, even though that is in the list of supported models.
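I haven't looked at how that check is actually written, but something along these lines (the function name is just for illustration) would cover o4 models as well:

;; Sketch only; the real predicate in the code may be structured differently.
(defun my/openai-completion-tokens-model-p (model)
  "Return non-nil if MODEL only accepts max_completion_tokens."
  (or (string-prefix-p "o1" model)
      (string-prefix-p "o3" model)
      (string-prefix-p "o4" model)))

(my/openai-completion-tokens-model-p "o4-mini") ;; => t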
Finally, when using a model that uses max_completion_tokens, the following error is returned:

Unknown parameter: 'max-completion-tokens'. Did you mean 'max_completion_tokens'?
Looks like a simple typo. Was this even tested after c4b5b68?
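For completeness, a sketch of what I mean (the actual payload construction in the package is surely different; the point is only the underscore spelling of the key):

(require 'json)

;; Sketch only: whatever alist the request body is built from, the JSON key
;; has to be "max_completion_tokens", not "max-completion-tokens".
(let ((max-completion-tokens 500))
  (json-encode `(("model" . "o4-mini")
                 ("max_completion_tokens" . ,max-completion-tokens))))
;; => "{\"model\":\"o4-mini\",\"max_completion_tokens\":500}"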
(Edited due to confusing myself with the various errors.)