
fix(tracer): use fixed-point format for _dd.p.ksr tag#4603

Merged
bm1549 merged 3 commits into main from
brian.marks/fix-knuth-sample-rate-format
Mar 26, 2026

Conversation

@bm1549
Contributor

@bm1549 bm1549 commented Mar 26, 2026

What does this PR do?

Fixes formatKnuthSamplingRate to use fixed-point ('f') formatting instead of general ('g') formatting. The 'g' format uses significant digits, which produces scientific notation for very small rates (e.g. "1e-06" instead of "0.000001"). The new implementation uses strconv.AppendFloat with a stack-allocated buffer and in-place trimming to avoid heap allocations.

Before: formatKnuthSamplingRate(0.000001) → "1e-06"
After: formatKnuthSamplingRate(0.000001) → "0.000001"
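The approach described above (fixed-point formatting into a stack buffer, then trimming trailing zeros and the decimal point in place) can be sketched as follows. This is a minimal reconstruction based on the PR description, not the exact merged code:

```go
package main

import (
	"fmt"
	"strconv"
)

// formatKnuthSamplingRate sketch: 'f' (fixed-point) formatting with
// 6 decimal places, appended into a stack-allocated buffer to avoid
// a heap allocation, then trimmed in place.
func formatKnuthSamplingRate(rate float64) string {
	var buf [24]byte
	b := strconv.AppendFloat(buf[:0], rate, 'f', 6, 64)
	// Trim trailing zeros ("0.500000" -> "0.5").
	for len(b) > 0 && b[len(b)-1] == '0' {
		b = b[:len(b)-1]
	}
	// Trim a dangling decimal point ("0." -> "0").
	if len(b) > 0 && b[len(b)-1] == '.' {
		b = b[:len(b)-1]
	}
	return string(b)
}

func main() {
	fmt.Println(formatKnuthSamplingRate(0.000001))  // 0.000001
	fmt.Println(formatKnuthSamplingRate(0.0000001)) // 0
	fmt.Println(formatKnuthSamplingRate(0.5))       // 0.5
}
```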

Motivation

The Test_Knuth_Sample_Rate.test_sampling_knuth_sample_rate_trace_sampling_rule parametric system tests (added in system-tests#6466) enforce precision-boundary cases that fail with the current 'g' format:

Input        'g' (before)   'f' (after)   Expected
0.000001     "1e-06"        "0.000001"    "0.000001"
0.0000001    "1e-07"        "0"           "0"
0.00000051   "5.1e-07"      "0.000001"    "0.000001"

Benchmark

Old ('g' format):
BenchmarkFormatKnuthSamplingRate-10    35274837    34.64 ns/op    5 B/op    0 allocs/op

New (AppendFloat + trim):
BenchmarkFormatKnuthSamplingRate-10    30811003    39.17 ns/op    5 B/op    0 allocs/op

Same 0 allocs/op, same 5 B/op. ~4.5ns overhead from 'f' formatting.
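The numbers above come from the repo's own benchmark. For reference, this kind of measurement can also be reproduced standalone with testing.Benchmark; the formatRate helper here is a hypothetical stand-in for the tracer's function, not the merged code:

```go
package main

import (
	"fmt"
	"strconv"
	"testing"
)

// formatRate is a hypothetical stand-in for the tracer's helper.
func formatRate(rate float64) string {
	var buf [24]byte
	b := strconv.AppendFloat(buf[:0], rate, 'f', 6, 64)
	for len(b) > 0 && b[len(b)-1] == '0' {
		b = b[:len(b)-1]
	}
	if len(b) > 0 && b[len(b)-1] == '.' {
		b = b[:len(b)-1]
	}
	return string(b)
}

// sink prevents the compiler from optimizing the call away.
var sink string

func main() {
	// testing.Benchmark runs a benchmark outside `go test` and
	// returns timing plus allocation statistics.
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			sink = formatRate(0.000001)
		}
	})
	fmt.Println(res.String(), res.MemString())
}
```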

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage.
  • System-Tests covering this feature have been added and enabled with the va.b.c-dev version tag.
  • There is a benchmark for any new code, or changes to existing code.
  • If this interacts with the agent in a new way, a system test has been added.
  • New code is free of linting errors. You can check this by running make lint locally.
  • New code doesn't break existing tests. You can check this by running make test locally.
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • All generated files are up to date. You can check this by running make generate locally.
  • Non-trivial go.mod changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild. Make sure all nested modules are up to date by running make fix-modules locally.

🤖 Generated with Claude Code

formatKnuthSamplingRate was using 'g' (significant digits) format which
produces scientific notation for very small rates (e.g. "1e-06" instead
of "0.000001"). Switch to 'f' (fixed-point) format with 6 decimal
places to match the cross-language spec enforced by system tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
@bm1549 bm1549 added the AI Generated label (largely based on code generated by an AI or LLM; this label is the same across all dd-trace-* repos) Mar 26, 2026
@codecov

codecov bot commented Mar 26, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 60.68%. Comparing base (ee1fdc0) to head (b6922ba).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
Files with missing lines Coverage Δ
ddtrace/tracer/sampler.go 95.72% <100.00%> (+0.11%) ⬆️

... and 269 files with indirect coverage changes


@datadog-prod-us1-3

datadog-prod-us1-3 bot commented Mar 26, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 60.07% (+0.02%)

🔗 Commit SHA: db7b27f

@pr-commenter

pr-commenter bot commented Mar 26, 2026

Benchmarks

Benchmark execution time: 2026-03-26 13:41:37

Comparing candidate commit db7b27f in PR branch brian.marks/fix-knuth-sample-rate-format with baseline commit ee1fdc0 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 215 metrics, 9 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'

@bm1549 bm1549 marked this pull request as ready for review March 26, 2026 01:24
@bm1549 bm1549 requested review from a team as code owners March 26, 2026 01:24
Comment on lines +127 to +130
s := strconv.FormatFloat(rate, 'f', 6, 64)
s = strings.TrimRight(s, "0")
s = strings.TrimRight(s, ".")
return s
Member


I'd recommend benchmarking the old implementation against the new.

Contributor Author


Added a benchmark in b6922ba. Results:

Old ('g' format):
BenchmarkFormatKnuthSamplingRate-10    35274837    34.64 ns/op    5 B/op    0 allocs/op

New ('f' + TrimRight):
BenchmarkFormatKnuthSamplingRate-10    30153468    39.87 ns/op    8 B/op    1 allocs/op

~5ns/op overhead with 1 extra allocation (8 bytes). This function is called once per root span, so the difference should be negligible in practice. Does this look acceptable to you?

Member


@bm1549 We are battling allocations one by one. I can take care of it, as the improvement is clear, but we need to squeeze that allocation.

Contributor Author


Squeezed it out in db7b27f. Switched to strconv.AppendFloat with a stack-allocated [24]byte buffer and in-place trimming — 0 allocs now:

Old ('g' format):
BenchmarkFormatKnuthSamplingRate-10    35274837    34.64 ns/op    5 B/op    0 allocs/op

New (AppendFloat + trim):
BenchmarkFormatKnuthSamplingRate-10    30811003    39.17 ns/op    5 B/op    0 allocs/op

Same 0 allocs, same 5 B/op. The ~4.5ns difference is just 'f' formatting being slightly more work than 'g'.

bm1549 and others added 2 commits March 26, 2026 08:47
Requested by reviewer to compare old ('g') vs new ('f') implementation.
Results show ~5ns/op overhead (35→40 ns/op), 1 extra alloc (8B), which
is negligible since this runs once per root span.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Use strconv.AppendFloat with a stack-allocated [24]byte buffer and
in-place trimming instead of FormatFloat + strings.TrimRight, which
was creating an extra heap allocation. The new implementation has
0 allocs/op matching the original 'g' format implementation.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
@darccio
Member

darccio commented Mar 26, 2026

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. 🚀


@bm1549 bm1549 enabled auto-merge (squash) March 26, 2026 14:39
@bm1549 bm1549 merged commit aca1a62 into main Mar 26, 2026
210 checks passed
@bm1549 bm1549 deleted the brian.marks/fix-knuth-sample-rate-format branch March 26, 2026 15:03