Why doesn't the Responses API support logprobs?

The Chat Completions API (create chat completion) has a logprobs parameter that returns a log probability for every output token.

The Responses API (create model response) doesn't have this parameter. Am I missing something, or is logprobs intentionally unsupported for Responses?

I’d love to try out the new Responses API, but I really need logprobs for my work.
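For reference, here's a minimal sketch of the kind of Chat Completions call I rely on (the model name and prompt are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=8,
    logprobs=True,     # return a logprob for each sampled output token
    top_logprobs=3,    # plus the 3 most likely alternatives at each position
)

# Each entry carries the sampled token, its logprob, and the top alternatives.
for tok in resp.choices[0].logprobs.content:
    print(f"{tok.token!r}: {tok.logprob:.4f}")
    for alt in tok.top_logprobs:
        print(f"    alt {alt.token!r}: {alt.logprob:.4f}")
```

As far as I can tell, there's no equivalent parameter to pass to client.responses.create, which is what prompted this question.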


There isn't even a field in the Responses API's response object that could hold them in the future.

This omission is the opposite of the situation with the "reasoning summary", which appears in the output list even on models that will never return one to the API. If logprobs were planned for Responses, their output format would likely have been documented ahead of implementation in the same way, but it isn't.

Logprobs have recently been flaky, quantized, and error-prone, alongside other parameters that simply don't work, and they expose the models' increasing non-determinism. Given OpenAI's general inattention to exposing or tuning these internals, and the fact that logprobs reveal things OpenAI may prefer to keep hidden, I'm not holding my breath.

Another angle: unifying the API. You can't call o1-mini or o1-preview on Responses, presumably because those models lack support for developer instructions and other features. The same motivation may apply here: logprobs aren't supported on the reasoning models that are the path forward, which may be reason enough to simply "drop" the parameter.

I'd love for OpenAI to prove me wrong by including them.


Bummer, logprobs would be so helpful in Responses! I hope that OpenAI reconsiders.
