Optimize AddCaller+AddStacktrace by sharing stacks (#1052)
(Supersedes #808.)
If we use the `AddCaller(true)` option for a logger with
`AddStacktrace(level)`, which captures a stack trace for the specified
level or higher, Zap currently inspects the stack twice:
- `getCallerFrame` retrieves information about the immediate caller of
the log entry
- `StackSkip` calls `takeStacktrace` to capture the entire stack trace
and build a string from it
For the case where both the caller and the stack trace are requested,
we can avoid the redundant work by sharing information about the call
stack between the two code paths.
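For context, here is a minimal, self-contained sketch of that double-walk pattern. The helper names mirror zap's internals, but the bodies are simplified illustrations of the pattern, not zap's actual code:

```
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// getCallerFrame walks the stack once to find the immediate caller.
func getCallerFrame(skip int) (runtime.Frame, bool) {
	pc := make([]uintptr, 1)
	// +2 skips runtime.Callers itself and this function.
	if runtime.Callers(skip+2, pc) < 1 {
		return runtime.Frame{}, false
	}
	frame, _ := runtime.CallersFrames(pc).Next()
	return frame, frame.PC != 0
}

// takeStacktrace walks the stack a second time to render the whole trace.
func takeStacktrace(skip int) string {
	pc := make([]uintptr, 64)
	n := runtime.Callers(skip+2, pc)
	frames := runtime.CallersFrames(pc[:n])

	var sb strings.Builder
	for {
		frame, more := frames.Next()
		fmt.Fprintf(&sb, "%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
		if !more {
			break
		}
	}
	return sb.String()
}

func main() {
	// With AddCaller and AddStacktrace both enabled, a logger following
	// this pattern pays for two separate stack walks per log entry.
	if frame, ok := getCallerFrame(0); ok {
		fmt.Printf("caller: %s:%d\n", frame.File, frame.Line)
	}
	fmt.Print(takeStacktrace(0))
}
```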
To accomplish this, the following high-level pieces were added:
```
type stacktrace
func captureStacktrace(skip int, depth stacktraceDepth) *stacktrace
func (*stacktrace) Next() (runtime.Frame, bool)

type stackFormatter
func newStackFormatter(*buffer.Buffer)
func (*stackFormatter) FormatStack(*stacktrace)
func (*stackFormatter) FormatFrame(runtime.Frame)
```
`captureStacktrace` captures a stack trace (either just the first frame
or the entire stack) that can then be inspected manually (to extract
caller information) and formatted into a string representation, either
one frame at a time or wholesale, with `stackFormatter`.
This allows us to reorganize the logic that applies the AddCaller and
AddStacktrace options so that it can (see the sketch below):
- capture as much of the stack as it needs
- extract caller information from the first frame if we need it
- format that frame and the remaining frames into a stack trace
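Put together, the reorganized flow looks roughly like the following sketch. It uses the API listed above, but the glue code and the names `stacktraceFirst`, `stacktraceFull`, and `callerSkipOffset` are assumptions about the surrounding logger internals for illustration, not the exact implementation:

```
// Sketch (inside package zap): capture the stack once, then share the
// result between caller extraction and stack-trace formatting.
func addCallerAndStack(buf *buffer.Buffer, addCaller, addStack bool) (caller runtime.Frame, ok bool) {
	// Capture as much of the stack as we need: one frame if only the
	// caller was requested, the whole stack if a trace was requested.
	depth := stacktraceFirst // assumed stacktraceDepth constants
	if addStack {
		depth = stacktraceFull
	}
	stack := captureStacktrace(callerSkipOffset, depth)

	// Extract caller information from the first frame if we need it.
	frame, more := stack.Next()
	if addCaller {
		caller, ok = frame, frame.PC != 0
	}

	// Format that frame and the remaining frames into a stack trace.
	if addStack {
		sf := newStackFormatter(buf)
		sf.FormatFrame(frame)
		if more {
			sf.FormatStack(stack)
		}
	}
	return caller, ok
}
```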
---
Benchmarking:
I ran the new benchmark included in this PR, as well as the existing
stack trace- and caller-based benchmarks, 10 times for 10 seconds each.
```
% go test -run '^$' -bench 'Caller|Stack' -benchmem -benchtime 10s -count 10
[...]
name                        old time/op    new time/op    delta
StackField-96                 834ns ± 8%     819ns ±14%      ~     (p=0.971 n=10+10)
AddCallerHook-96              466ns ± 7%     480ns ± 1%      ~     (p=0.118 n=10+6)
AddCallerAndStacktrace-96    1.16µs ± 4%    0.78µs ± 8%   -32.60%  (p=0.000 n=9+10)
TakeStacktrace-96            2.66µs ± 2%    2.74µs ± 2%    +3.01%  (p=0.000 n=9+10)

name                        old alloc/op   new alloc/op   delta
StackField-96                  542B ± 0%      544B ± 0%    +0.33%  (p=0.000 n=10+10)
AddCallerHook-96               254B ± 0%      248B ± 0%    -2.36%  (p=0.000 n=10+10)
AddCallerAndStacktrace-96      652B ± 0%      429B ± 0%   -34.20%  (p=0.000 n=9+6)
TakeStacktrace-96              420B ± 0%      421B ± 0%    +0.24%  (p=0.000 n=10+10)

name                        old allocs/op  new allocs/op  delta
StackField-96                  3.00 ± 0%      3.00 ± 0%      ~     (all equal)
AddCallerHook-96               3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=10+10)
AddCallerAndStacktrace-96      5.00 ± 0%      3.00 ± 0%   -40.00%  (p=0.000 n=10+10)
TakeStacktrace-96              2.00 ± 0%      2.00 ± 0%      ~     (all equal)
```
Summary:
- AddCaller+AddStacktrace needs two fewer allocations and is ~30% faster
- AddCaller costs one fewer allocation
- takeStacktrace costs 3% more CPU
I think that the overall improvement is worth the slightly more expensive stack traces.
Refs GO-1181