The previous CodSpeed run on #204 showed ±10-16% drift on tasks that
weren't touched by the buffers() work (e.g. cached-source: size() (cached),
cached-source: source() (cold), concat-source: getChildren()). Root cause
is module-level shared state (warmed, warmedConcat, sink, *SourceLike)
that grows whenever a new task is added to a case file. That state
perturbs V8's hidden-class cache and GC heap layout for every task in the
file, so adding a new task shifts pre-existing tasks' measurements.
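The shared-state problem can be sketched as follows. This is a hedged illustration of the pattern the commit removes, not the actual case-file code: buildWarmedSource() and the inline bench stub are hypothetical stand-ins for the heavy fixture construction and the tinybench harness.

```javascript
// Anti-pattern sketch: fixtures at module scope are built at import time
// and stay live for every task in the file, so the heap layout a given
// task sees depends on which other tasks exist in the same file.
function buildWarmedSource() {
  // Stand-in for warming a real CachedSource/ConcatSource fixture.
  return { source: () => "x".repeat(1024) };
}

// Minimal stand-in for a tinybench Bench.
const bench = { tasks: [], add(name, fn) { this.tasks.push({ name, fn }); } };

// Module-scope state: retained for the whole run, not just while the
// task that uses it is measured. Adding another fixture here shifts
// measurements of unrelated tasks.
const warmed = buildWarmedSource();

bench.add("cached-source: source() (warm)", () => warmed.source());
```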
Two related fixes:
1. Teach the CodSpeed bridge about tinybench's beforeAll/beforeEach/
afterEach/afterAll hooks. The walltime path already honored them via
tinybench itself, but the simulation path was just calling task.fn()
raw, so any hook-based fixture setup was ignored. The bridge now runs
beforeAll before warmup, beforeEach around every iteration (warmup and
instrumented), afterEach after each, and afterAll after the
instrumented pass. global.gc() still runs right before the
instrumented call, after beforeEach.
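The hook ordering above can be sketched as a simulation-path runner. This is a hedged reconstruction, not the bridge's actual code: runTask and the shape of task.opts are assumptions modeled on tinybench's hook API, and the measured call site stands in for CodSpeed's instrumentation.

```javascript
// Sketch of the simulation-path hook order: beforeAll before warmup,
// beforeEach/afterEach around every iteration (warmup and instrumented),
// gc() right before the instrumented call, afterAll at the end.
async function runTask(task, { warmupIterations = 10 } = {}) {
  const { beforeAll, beforeEach, afterEach, afterAll } = task.opts ?? {};

  await beforeAll?.call(task);

  // Warmup iterations: hooks run, nothing is measured.
  for (let i = 0; i < warmupIterations; i++) {
    await beforeEach?.call(task);
    await task.fn();
    await afterEach?.call(task);
  }

  // Instrumented pass: collect garbage after beforeEach, right before
  // the measured call (requires node --expose-gc; skipped otherwise).
  await beforeEach?.call(task);
  globalThis.gc?.();
  await task.fn(); // the measured call
  await afterEach?.call(task);

  await afterAll?.call(task);
}
```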
2. Move heavy fixtures out of module scope into per-task beforeAll
closures. Each case file's warmed/warmedConcat/warmLayeredChunk/
sourceLike/richSourceLike/sink now lives inside register() and is
assigned in beforeAll and nulled in afterAll, so a case file only
retains memory for whichever task is currently running. Adding a
future task to any of these files should no longer shift the
pre-existing tasks' measurements.
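The per-task closure pattern can be sketched like so. This is a hedged illustration of fix 2, not the real case-file code: register's signature and buildWarmedSource() are hypothetical, though bench.add follows tinybench's (name, fn, opts) shape.

```javascript
// Sketch: the fixture lives in register()'s closure, is built in
// beforeAll (so it exists only while this task runs), and is nulled in
// afterAll (so it is collectible before the next task is measured).
function register(bench, buildWarmedSource) {
  let warmed = null; // closure-scoped, not module-scoped

  bench.add(
    "cached-source: source() (warm)",
    () => warmed.source(),
    {
      beforeAll() { warmed = buildWarmedSource(); }, // built when this task starts
      afterAll() { warmed = null; },                 // released when it ends
    }
  );
}
```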
Also bumped warmupIterations from 2 to 10 in run.mjs so V8 hidden-class
caches and the GC heap settle before the measured iteration.
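A minimal sketch of the run.mjs change, shown as a plain options object; the exact place this option is threaded into the bridge is not shown here.

```javascript
// warmupIterations bumped from 2 to 10: give V8's hidden-class caches
// and the GC heap time to settle before the measured iteration.
const benchOptions = {
  warmupIterations: 10, // was 2
};
```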
Side-effect visible in wall-clock numbers: new ConcatSource buffer/
buffers tasks now measure just the method call (construction moved to
beforeAll), so the ratio grew from ~3x to ~10x on the flat 10-raw case
and from ~2.4x to ~12x on nested 4x10. That's the honest comparison —
the previous number was diluted by per-task fixture construction.