Conversation
Signed-off-by: Eliza Weisman <[email protected]>
turns out we don't need that
whoops, my bad --- the queue can be empty _or_ waiting when adding a waiter
This reverts commit 35b8643. apparently the stupid enum really harms performance :(
i HATE that this meaningfully improves performance. i thought rust had "zero cost abstractions", wtf.
this might happen if it failed to consume a wakeup
This branch rewrites the MPSC channel wait queue implementation (again),
in order to improve performance. This undoes a decently large amount of
the perf regression from PR #20.
In particular, I've made the following changes:

- Factored out the "fast path" of both the notify and wait operations (the part that can be performed without actually locking) into separate functions, and marked them as `#[inline(always)]`. If we weren't able to perform the operation without actually touching the linked list, we call into a separate `#[inline(never)]` function that actually locks the list and performs the slow path. This means that code that uses these functions still has a function call in it, but a few instructions for performing a CAS can be inlined and the function call avoided in some cases. This significantly improves performance!
- Split the `wait` function into `start_wait` (called the first time a node waits) and `continue_wait` (called if the node is woken, to handle spurious wakeups). This allows simplifying the code for modifying the waker so that we don't have to pass big closures around.
- Added cache padding to some variables that should have been cache padded.
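The fast-path/slow-path split in the first bullet can be sketched roughly like this. This is a minimal illustration of the inlining pattern only, not the actual wait-queue code: the `Queue` type, the `EMPTY`/`WAKING`/`WAITING` state encoding, and the `notify`/`notify_slow` names here are all hypothetical stand-ins.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

// Hypothetical state word: EMPTY and WAKING transitions can be handled with
// a single CAS; the WAITING state requires locking the waiter list.
const EMPTY: usize = 0;
const WAKING: usize = 1;
#[allow(dead_code)]
const WAITING: usize = 2;

struct Queue {
    state: AtomicUsize,
    // Stand-in for the intrusive linked list of waiters.
    waiters: Mutex<Vec<&'static str>>,
}

impl Queue {
    // Fast path: a single CAS, small enough to inline at every call site.
    #[inline(always)]
    fn notify(&self) {
        // If the queue is empty, just record the wakeup and return without
        // ever touching the waiter list.
        if self
            .state
            .compare_exchange(EMPTY, WAKING, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
        {
            return;
        }
        // Otherwise, fall through to the out-of-line slow path.
        self.notify_slow();
    }

    // Slow path: locks the list. Kept out of line so the inlined fast path
    // stays just a few instructions.
    #[inline(never)]
    fn notify_slow(&self) {
        let mut waiters = self.waiters.lock().unwrap();
        waiters.pop(); // wake (discard) one waiter, if any
        if waiters.is_empty() {
            self.state.store(EMPTY, Ordering::SeqCst);
        }
    }
}
```

The point of the split is that callers pay only for the CAS when it succeeds; the lock acquisition and list manipulation stay behind one out-of-line call.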
Performance Comparison
These benchmarks were run against the current `main` branch (f77d534).
async/mpsc_reusable
async/mpsc_integer
async/spsc/try_send_reusable
async/spsc/try_send_integer
I'm actually not really sure why this also improved the `try_send` benchmarks, which don't touch the wait queue... but I'll take it!
Signed-off-by: Eliza Weisman [email protected]