perf(prof): pre-reserve function name buffer #3445
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##           master    #3445      +/-  ##
=========================================
- Coverage   61.74%   61.64%   -0.11%
=========================================
  Files         143      143
  Lines       13045    13045
  Branches     1704     1704
=========================================
- Hits         8055     8041      -14
- Misses       4228     4241      +13
- Partials      762      763       +1
```

See 3 files with indirect coverage changes. Continue to review the full report in Codecov by Sentry.
Benchmarks [profiler]: Benchmark execution time: 2025-12-17 19:39:17. Comparing candidate commit 79801a8 in PR branch. Found 0 performance improvements and 0 performance regressions! Performance is the same for 29 metrics, 7 unstable metrics.
realFlowControl left a comment:
Check the buffer.reserve() vs buffer.reserve_exact() thingy otherwise LGTM!
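For context on the reviewer's point: `reserve` may over-allocate to amortize future growth, while `reserve_exact` requests only the additional capacity asked for. A minimal sketch (illustrative, not from the PR):

```rust
fn main() {
    let mut buf = String::new();
    // `reserve` guarantees room for at least 10 more bytes, but may
    // deliberately round up so that subsequent pushes don't reallocate.
    buf.reserve(10);
    assert!(buf.capacity() >= 10);

    let mut exact = String::new();
    // `reserve_exact` requests exactly 10 additional bytes. The allocator
    // can still return more, but no intentional over-allocation occurs,
    // which fits a buffer whose final size is known up front.
    exact.reserve_exact(10);
    assert!(exact.capacity() >= 10);
}
```

When the total length is computed ahead of time, as in this PR, `reserve_exact` (or `String::with_capacity`) avoids carrying extra slack.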
```rust
let module_len = has_module as usize * "|".len() + module_name.len();
let class_name_len = has_class as usize * "::".len() + class_name.len();
```
That's a nice and cheap (in CPU cycles) way to avoid a branch.
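The trick works because casting a `bool` to `usize` yields 0 or 1, so multiplying by the separator length contributes 0 bytes when the segment is absent, with no conditional jump. A small sketch of the idea (the names here are illustrative, not the PR's actual identifiers):

```rust
// Branch-free contribution of one optional segment to the total length:
// `present as usize` is 0 or 1, so the separator length is either fully
// counted or multiplied away, without an `if`.
fn segment_len(present: bool, sep: &str, name: &str) -> usize {
    present as usize * sep.len() + name.len()
}

fn main() {
    let module_name = "Foo";
    let class_name = "Bar";
    let has_module = !module_name.is_empty();
    let has_class = !class_name.is_empty();

    // Mirrors the PR's `module_len` / `class_name_len` math.
    assert_eq!(segment_len(has_module, "|", module_name), 4); // "Foo" + "|"
    assert_eq!(segment_len(has_class, "::", class_name), 5); // "Bar" + "::"
    assert_eq!(segment_len(false, "|", ""), 0); // absent segment costs nothing
}
```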
Co-authored-by: Florian Engelhardt <[email protected]>
PROF-12543
Description
When a function name isn't cached, we calculate the function name, which has up to 5 "segments": the module name, a "|" separator, the class name, a "::" separator, and the function name itself.
And with how the code was written, we were doing potentially 5 memory allocations per function name. Of course, the Vec's growth amortization prevented some of these, but not all of them.
This now pre-reserves the size of the function name buffer.
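A sketch of the pre-reserving approach, under the assumption that the segments are joined as `module|class::function` (this is a simplified stand-in, not the PR's actual code):

```rust
// Build the function name with a single up-front allocation: compute the
// total length of all segments first, allocate once, then push each piece.
fn format_function_name(module: &str, class: &str, function: &str) -> String {
    let has_module = !module.is_empty();
    let has_class = !class.is_empty();

    // Branch-free length math, as in the PR's diff.
    let len = has_module as usize * "|".len() + module.len()
        + has_class as usize * "::".len() + class.len()
        + function.len();

    // One allocation instead of up to 5 as the buffer grows piecemeal.
    let mut buf = String::with_capacity(len);
    if has_module {
        buf.push_str(module);
        buf.push('|');
    }
    if has_class {
        buf.push_str(class);
        buf.push_str("::");
    }
    buf.push_str(function);
    debug_assert_eq!(buf.len(), len); // the reservation was exactly right
    buf
}
```

Because the capacity matches the final length, none of the `push_str` calls can trigger a reallocation.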
Motivation
Here's the before/after with the whole-host profiler:
The "key" bit is shaving off ~8ms of the ~24ms operation, across a minute. Obviously this is a tiny gain but it is also very easy. It's a good practice to right-size your strings/buffers when you know how large they are ahead of time.
The function cache is reset at the start of each request, so at the start of each request the samples tend to have more uncached function names than ones which happen later on. Coupled with the fact that we are walking the stack on-thread, the code is reasonably hot early on when collecting a sample.
Additional Notes
Today I was able to look at the whole-host profiler's data for the PHP profiler running on the enterprise DOE app that Florian has set up, and I noticed the memory being reallocated in the data.
I have used ddprof on the PHP profiler before, even recently. I believe I saw this reallocation with ddprof as well, but it had a larger impact when measured by the whole-host profiler. I have no idea which is more accurate, or if we are simply dealing with the fact that we're sampling something that by design happens infrequently, and just got different results accordingly.
Reviewer checklist