feat(span): remove clone bound on SpanText #1266
Conversation
# Motivation

Cloning is very cheap as long as we have a ByteString or a str slice, but if we want to use Python strings we need to hold the GIL to clone them. This is not possible in general... but no operation actually requires us to clone the data.

# Changes

* Use an indexmap for the shared dictionary. This data structure is exactly what we want (O(1) access and insertion-order iteration) and it keeps only one copy of each value.
* Use indices into the span slice while computing top-level spans, rather than copying the service string.
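The shared-dictionary change above can be sketched in plain std Rust. The actual PR uses the `indexmap` crate; the hypothetical `StringTable` below only emulates the two properties the description calls out (O(1) lookup, insertion-order iteration, one copy per distinct value), so callers can hold a small index instead of cloning string data:

```rust
use std::collections::HashMap;

/// Minimal sketch of an insertion-ordered string table (what `indexmap`
/// provides): O(1) lookup plus iteration in insertion order, storing each
/// distinct value exactly once. Names here are hypothetical, not the PR's.
struct StringTable {
    index: HashMap<String, u32>, // value -> position in `entries`
    entries: Vec<String>,        // values in first-insertion order
}

impl StringTable {
    fn new() -> Self {
        StringTable { index: HashMap::new(), entries: Vec::new() }
    }

    /// Intern a value and return its stable index. Callers keep the
    /// index, not another copy of the data.
    fn intern(&mut self, value: &str) -> u32 {
        if let Some(&i) = self.index.get(value) {
            return i; // already present: no new allocation
        }
        let i = self.entries.len() as u32;
        self.index.insert(value.to_owned(), i);
        self.entries.push(value.to_owned());
        i
    }
}

fn main() {
    let mut table = StringTable::new();
    let a = table.intern("web.request");
    let b = table.intern("db.query");
    let c = table.intern("web.request"); // deduplicated
    assert_eq!(a, c);
    assert_ne!(a, b);
    // Insertion-order iteration, one entry per distinct value.
    assert_eq!(table.entries, vec!["web.request", "db.query"]);
}
```

Because spans then reference indices into the table, serialization can walk `entries` once in insertion order without ever cloning the underlying Python-backed strings.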
✅ Tests: 🎉 All green! ❄️ No new flaky tests detected. 🔗 Commit SHA: 70351d5 | Docs | Datadog PR Page
# Motivation When we serialize, we store the upper 64 bits of the trace id in a meta tag. But at runtime we don't want to split it, as the trace id still needs to be readable.
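A minimal sketch of that split, with hypothetical names (the real serializer differs): the runtime keeps the full 128-bit id, and only at serialization time are the upper 64 bits peeled off into a meta tag (Datadog conventionally uses `_dd.p.tid` for this). The follow-up fix "skip high bits" corresponds to emitting no tag when the high half is zero:

```rust
/// Split a 128-bit trace id for serialization: the lower 64 bits stay in
/// the span's trace_id field, the upper 64 bits become a hex meta-tag
/// value. Hypothetical sketch, not the PR's actual code.
fn split_trace_id(trace_id: u128) -> (u64, Option<String>) {
    let lower = trace_id as u64;
    let upper = (trace_id >> 64) as u64;
    // Skip the tag when the high bits are zero, so plain 64-bit trace
    // ids serialize unchanged ("skip high bits" in the fix commit).
    let tag = if upper != 0 {
        Some(format!("{:016x}", upper))
    } else {
        None
    };
    (lower, tag)
}

fn main() {
    let full: u128 = (0x1234_u128 << 64) | 0x5678;
    assert_eq!(
        split_trace_id(full),
        (0x5678, Some("0000000000001234".to_string()))
    );
    // A 64-bit id produces no meta tag at all.
    assert_eq!(split_trace_id(0x5678), (0x5678, None));
}
```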
Benchmarks Comparison

Benchmark execution time: 2025-11-07 16:16:50
Comparing candidate commit 70351d5 in PR branch.
Found 1 performance improvement and 13 performance regressions! Performance is the same for 41 metrics, 2 unstable metrics.

scenario:benching serializing traces from their internal representation to msgpack
scenario:credit_card/is_card_number/37828224631000521389798
scenario:credit_card/is_card_number/x371413321323331
scenario:credit_card/is_card_number_no_luhn/ 378282246310005
scenario:credit_card/is_card_number_no_luhn/378282246310005
scenario:credit_card/is_card_number_no_luhn/37828224631000521389798
scenario:credit_card/is_card_number_no_luhn/x371413321323331
scenario:single_flag_killswitch/rules-based
Candidate benchmark details (Groups 1–17): omitted due to size.
Baseline: omitted due to size.
Codecov Report

❌ Patch coverage is
Additional details and impacted files:

@@ Coverage Diff @@
## main #1266 +/- ##
==========================================
- Coverage 71.86% 71.66% -0.20%
==========================================
Files 370 370
Lines 58499 58557 +58
==========================================
- Hits 42042 41967 -75
- Misses 16457 16590 +133
Artifact Size Benchmark Report

aarch64-alpine-linux-musl
aarch64-unknown-linux-gnu
libdatadog-x64-windows
libdatadog-x86-windows
x86_64-alpine-linux-musl
x86_64-unknown-linux-gnu
/merge
View all feedback in the Devflow UI.
This pull request is not mergeable according to GitHub. Common reasons include pending required checks, missing approvals, or merge conflicts — but it could also be blocked by other repository rules or settings.
The expected merge time in
Squashed commits:

* feat(trace-utils): Remove clone bound on SpanText
* fix: license 3rd party
* feat(trace-utils): widen trace id to 128 bits
* fix: skip high bits in 128-bit trace id serialization
* Merge branch 'main' into paullgdc/trace-utils/remove_clone_on_spantext
* Merge branch 'main' into paullgdc/trace-utils/remove_clone_on_spantext
* fix: 3rd party + lints

Co-authored-by: paul.legranddescloizeaux <[email protected]>
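The "indices into the span slice while computing top level" change can also be sketched. This is a hypothetical reconstruction, not the PR's code: a span is treated as top-level when it has no parent in the payload or its parent belongs to a different service, and the comparison borrows both services through slice indices instead of cloning service strings into a temporary map:

```rust
use std::collections::HashMap;

/// Hypothetical span shape for illustration only.
struct Span {
    span_id: u64,
    parent_id: u64,
    service: String,
    top_level: bool,
}

/// Mark top-level spans: no parent in the slice, or the parent's service
/// differs. Parents are resolved to positions in `spans`, so services are
/// compared by borrowing through indices; nothing is cloned.
fn compute_top_level(spans: &mut [Span]) {
    let id_to_idx: HashMap<u64, usize> = spans
        .iter()
        .enumerate()
        .map(|(i, s)| (s.span_id, i))
        .collect();
    for i in 0..spans.len() {
        let top = match id_to_idx.get(&spans[i].parent_id) {
            Some(&p) => spans[p].service != spans[i].service,
            None => true, // root or orphan span
        };
        spans[i].top_level = top;
    }
}

fn main() {
    let mut spans = vec![
        Span { span_id: 1, parent_id: 0, service: "web".to_string(), top_level: false },
        Span { span_id: 2, parent_id: 1, service: "web".to_string(), top_level: false },
        Span { span_id: 3, parent_id: 1, service: "db".to_string(), top_level: false },
    ];
    compute_top_level(&mut spans);
    assert!(spans[0].top_level);  // root
    assert!(!spans[1].top_level); // same service as parent
    assert!(spans[2].top_level);  // service boundary
}
```

Because `SpanText` no longer needs `Clone`, this index-based formulation works even when the backing strings are GIL-guarded Python objects.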