fix(tsdb): backport zero sample schema optimization from pr 17071 #1001
Merged
Conversation
fix(tsdb): backport zero sample schema optimization from pr 17071

There is an optimization in prometheus/prometheus#17071 that ensures the injected zero histogram sample has the same schema as the next sample.

It turns out that if there are at least three samples, a normal sample (schema > 0), a zero sample (schema = 0), and another normal sample (schema > 0), then the histogram rate function finds the lowest schema and normalizes all samples to it, meaning the normal samples are downscaled to schema 0. In dashboards it looks as if we lost resolution.

See https://github.com/prometheus/prometheus/blob/9e4d23ddafcdc00021cd8630e78bb819e84ccac9/promql/functions.go#L344

So the optimization is actually needed to avoid losing resolution on read-back. Alternatively, we could use per-sample CT as metadata instead, but that would be a big rewrite of TSDB and PromQL.

Signed-off-by: György Krajcsovits <[email protected]>
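The downscaling described above can be sketched in a few lines of Go. Note this is an illustrative example, not the actual Prometheus code: `minSchema` is a hypothetical helper that mirrors the min-schema selection the histogram rate function performs over a range window (higher schema = finer bucket resolution, so taking the minimum coarsens every sample in the window).

```go
package main

import "fmt"

// minSchema mimics how the histogram rate function normalizes a window
// of native histogram samples: every sample is downscaled to the lowest
// (coarsest) schema present in the window. Hypothetical helper for
// illustration only.
func minSchema(schemas []int32) int32 {
	m := schemas[0]
	for _, s := range schemas[1:] {
		if s < m {
			m = s
		}
	}
	return m
}

func main() {
	// Without the optimization: the injected zero sample has schema 0,
	// so the whole window is downscaled to schema 0.
	fmt.Println(minSchema([]int32{3, 0, 3}))

	// With the optimization: the zero sample copies the next sample's
	// schema, so the window keeps its full resolution (schema 3).
	fmt.Println(minSchema([]int32{3, 3, 3}))
}
```

This illustrates why copying the next sample's schema onto the injected zero sample preserves resolution: the injected sample no longer drags the window minimum down to 0.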
ortuman
approved these changes
Oct 10, 2025
Cherry pick of #1000 from r360.
Which issue(s) does the PR fix:
Does this PR introduce a user-facing change?