feat: allow overriding task function durations #329
jerome-benoit merged 1 commit into tinylibs:main from
Conversation
I don't have strong opinions in terms of API design. An alternative API could be something like this:

```javascript
bench.add('my task', (handle) => {
  let endTime;
  function ReportEndTime() {
    endTime = bench.opts.now();
    return null; // renders nothing; only records the timestamp
  }
  function BenchmarkComponent() {
    return (
      <>
        {someViews}
        <ReportEndTime />
      </>
    );
  }
  const startTime = bench.opts.now();
  render(<BenchmarkComponent />);
  handle.overrideDuration(endTime - startTime);
});
```
|
It sounds reasonable to propose an object with a keyed namespace that can override some of tinybench's internal measurements. I do not have a better idea at the moment.
Thanks for the reply! Are you supporting the option in the PR or the alternative solution I suggested in the comment?
The option in the PR looks more in line with a "natural" coding experience. My only concern is about timestamping consistency: #329 (comment)
How would you suggest we do this? I see that we can already customize a
I have to check, but would it be difficult to add
We could, but the question is how do you force users to use that version instead of any other. Unless you provide a new API that provides special values and you validate that the custom duration (or custom start/end) uses them, I don't think it's possible.
The idea is to at least offer a way to easily reuse the custom or the default timestamping for that use case. Let users shoot themselves in the foot if they mix timestamping methods and accuracies. I think we can assume they have a clue about what they are doing, and documentation on proper usage should be enough.
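To sketch the consistency concern discussed here: both endpoints of a custom duration should come from the same clock. In this snippet `performance.now()` stands in for the bench's configured timestamping function (`bench.opts.now` as used earlier in this thread); the exact shape of that option is an assumption.

```javascript
// Sketch: use one clock source for both endpoints of a custom duration.
// `now` stands in for the bench's configured timestamping function;
// mixing e.g. Date.now() and performance.now() would skew the result.
const now = () => performance.now();

const start = now();
for (let i = 0; i < 1e4; i++) {} // code under measurement (placeholder)
const end = now();

const customDuration = end - start; // same clock for both ends
console.log(typeof customDuration === 'number' && customDuration >= 0); // true
```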
Oh, I see. We can add a
Edit: the option already exists via
jerome-benoit left a comment
A 4.1 release will be done shortly after the merge.
Thanks.
Thanks for the review, @jerome-benoit!
This implements a feature to allow task functions to return a custom duration to be used for the benchmark.
This is useful in cases where we want to measure a subset of the function logic, where splitting that logic into `beforeEach` and `afterEach` wouldn't be possible or would be very inconvenient. For example, if we want to measure the time spent rendering components in React without including the time spent executing their effects, we would do something like:
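The original example is not reproduced here; the following is a minimal self-contained sketch of the idea, assuming (per this PR's description) that the task function's numeric return value is used as the overriding duration. The `renderComponents` and `runEffects` functions are hypothetical stand-ins for the React rendering and effect work.

```javascript
// Hedged sketch, not the exact tinybench API: per this PR, a task function
// can return a custom duration that overrides the measured one.
// renderComponents/runEffects are hypothetical stand-ins for the React work.
function renderComponents() {
  let sum = 0;
  for (let i = 0; i < 1e5; i++) sum += i; // the part we want to measure
  return sum;
}

function runEffects() {
  // effect execution we want to exclude from the measurement
}

// Task function shaped for the feature described above.
function task() {
  const start = performance.now();
  renderComponents();
  const end = performance.now();
  runEffects(); // runs inside the task, but outside the reported duration
  return end - start; // custom duration, in milliseconds
}

const duration = task();
console.log(typeof duration === 'number' && duration >= 0); // true
```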