There’s no field in Responses’ response object even reserved for containing logprobs in the future.
That omission is the opposite of how the “reasoning summary” is handled — a placeholder returned in the output list even on models that will never surface it through the API. If logprobs were planned for Responses, their output format would likely have been documented ahead of implementation, but it isn’t.
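To make the gap concrete, here’s a side-by-side sketch of the two request payloads (model name and prompt are just placeholders): Chat Completions accepts `logprobs`/`top_logprobs`, while the Responses request schema, as documented at the time of writing, has no equivalent field — and the response object reserves no place for one.

```python
# Chat Completions: logprobs are a documented request parameter.
chat_request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "logprobs": True,    # per-token logprobs returned in choices[0].logprobs
    "top_logprobs": 5,   # top-5 alternative tokens per position
}

# Responses: no logprobs / top_logprobs parameter exists in the schema,
# and the response object has no field where they could appear later.
responses_request = {
    "model": "gpt-4o-mini",
    "input": "Hello",
}

print("logprobs" in chat_request)       # True
print("logprobs" in responses_request)  # False
```

Sending `logprobs` to the Responses endpoint would simply be rejected as an unrecognized parameter, which is the point: there’s nothing there to switch on.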
Logprobs have lately been flaky, quantized, and error-prone, alongside other parameters that simply don’t work, and they expose the models’ increasing non-determinism. Given OpenAI’s general inattention to presenting or tuning these internals, and the fact that logprobs facilitate revelations OpenAI would rather avoid, I’m not holding my breath.
Another example: unifying the API. You can’t call o1-mini or o1-preview. No support for developer instructions or other parameters? The motivation is similar — reasoning models are the path forward, and since they don’t support these features, the easiest choice is to just “drop” them.
I’d love for OpenAI to prove me wrong by including them.