Use strong_count instead of upgrade on weak Arcs in cache #459

Merged
zaharidichev merged 3 commits into master from zd/use-strong-count on Mar 23, 2020

Conversation

zaharidichev (Member) commented Mar 18, 2020

I was trying to understand the caching code a bit more today. Unless I am missing something, it seems to me that we can use strong_count instead of trying to upgrade a Weak to determine eviction eligibility. At least in the case where we are not evicting, we swap a CAS loop and a few branches for a SeqCst load, which seems to make a difference. Here are the bench results. Let me know if I am missing something obvious.


+++master+++
test always_evict    ... bench:      22,313 ns/iter (+/- 1,882)
test never_evict     ... bench:     116,827 ns/iter (+/- 4,593)
test sometimes_evict ... bench:      71,082 ns/iter (+/- 2,954)


+++this branch+++
test always_evict    ... bench:      24,546 ns/iter (+/- 7,296)
test never_evict     ... bench:      36,306 ns/iter (+/- 3,261)
test sometimes_evict ... bench:      29,291 ns/iter (+/- 4,164)
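The swap described above can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the function names and the exact eviction condition (strong count of zero) are assumptions.

```rust
use std::sync::{Arc, Weak};

// Before: upgrade() runs a CAS loop to mint a new Arc, which the
// eviction check then drops immediately.
fn evictable_via_upgrade<T>(weak: &Weak<T>) -> bool {
    weak.upgrade().is_none()
}

// After: strong_count() is a single atomic load of the strong counter.
fn evictable_via_count<T>(weak: &Weak<T>) -> bool {
    weak.strong_count() == 0
}

fn main() {
    let arc = Arc::new(42u32);
    let weak = Arc::downgrade(&arc);
    assert!(!evictable_via_upgrade(&weak));
    assert!(!evictable_via_count(&weak));

    drop(arc); // last strong reference gone: the entry is now evictable
    assert!(evictable_via_upgrade(&weak));
    assert!(evictable_via_count(&weak));
    println!("upgrade-based and count-based checks agree");
}
```

Both checks answer the same question; the count-based one just avoids creating and destroying a temporary Arc on the hot path. (`Weak::strong_count` has been stable since Rust 1.41.)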

Signed-off-by: Zahari Dichev [email protected]

@zaharidichev zaharidichev requested review from hawkw and olix0r March 18, 2020 12:42
hawkw (Contributor) left a comment

the change looks good to me! i had some questions about the benchmark setup, not blockers.

linkerd2-error = { path = "../linkerd/error" }
linkerd2-lock = { path = "../linkerd/lock" }
linkerd2-stack = { path = "../linkerd/stack" }
bencher = "0.1.5"
hawkw (Contributor):

Elsewhere, we have benchmarks that are written with the libtest benchmark framework. I'm not opposed to switching to bencher, especially since it can run on stable; however, I think it would be good to pick one and use it consistently. If we decide to adopt bencher here, we should probably open a follow-up to change other benchmarks to use bencher as well.

We may also want to consider criterion rather than bencher. Although I have my doubts about the accuracy of criterion's regression detection (it seems way too sensitive to noise IMO), I think some of the additional benchmarking APIs criterion provides, like bench_with_input and iter_custom, are very useful when writing benchmarks... just a thought!
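For reference, a criterion benchmark using bench_with_input might look like the following. Everything here (names, the workload, the crate setup) is a hypothetical sketch assuming criterion as a dev-dependency, not anything from this PR:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

fn cache_benches(c: &mut Criterion) {
    // bench_with_input threads a parameter into the measured closure,
    // which is handy for benchmarking a cache at several sizes.
    for &entries in &[16usize, 256, 4096] {
        c.bench_with_input(
            BenchmarkId::new("never_evict", entries),
            &entries,
            // Placeholder workload; the real bench would drive the cache.
            |b, &n| b.iter(|| (0..n).map(|i| i.wrapping_mul(31)).sum::<usize>()),
        );
    }
}

criterion_group!(benches, cache_benches);
criterion_main!(benches);
```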

@@ -0,0 +1,21 @@
[package]
name = "benches"
hawkw (Contributor):

Curious about the motivation to create a new benches crate, rather than putting the cache benchmarks in linkerd/cache/benches? Not a blocker, I just want to know if there's a reason we can't do the latter.

olix0r (Member):

All other things equal, I think I'd prefer that the tests are "close" to the code they test. That is, rather than one crate for all benches, I think they should live within the crate they test. My workflow is to run cargo test -p $crate.

zaharidichev (Member, Author):

No particular reason, really. I was looking at other crates on GitHub and how they organize things, and it was mostly 50/50. Makes sense to be closer to the code. I moved it and switched to libtest. Feel free to take another look. I am not sure whether the unstable feature flag is how it should be set up.

hawkw (Contributor):

> I was looking at other crates on GitHub and how they organize things and it was mostly 50/50

In general, I think putting benchmarks in a separate crate is something that libraries will do, usually so that projects that depend on that library don't need to download test-only dependencies. In this case, I don't think it's necessary, since there aren't any downstream consumers of linkerd-internal crates.

hawkw (Contributor) commented Mar 18, 2020

CI failure doesn't seem related, I'm giving it another try.

olix0r (Member) commented Mar 18, 2020

@hawkw the test certs are expired and need to be updated on master

@zaharidichev zaharidichev requested a review from hawkw March 19, 2020 09:25
hawkw (Contributor) left a comment

One last note on benchmark structure.

Comment on lines 13 to 14
#[cfg(all(feature = "unstable", test))]
mod benches;
hawkw (Contributor):

Instead of a module, let's put the benchmarks in a separate benches directory inside the cache crate? That's generally considered more conventional, and it would let us run them with cargo bench, as well as removing the need for an unstable feature flag.
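The conventional layout would look something like the following sketch. The file name and contents are assumed for illustration, and the #[bench] harness requires a nightly toolchain:

```rust
// linkerd/cache/benches/cache.rs
// Cargo discovers files under benches/ automatically, so `cargo bench`
// picks this up with no cfg or feature flag on the library itself.
#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn never_evict(b: &mut Bencher) {
    b.iter(|| {
        // build the cache and exercise it without triggering eviction
    });
}
```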

Signed-off-by: Zahari Dichev <[email protected]>
@zaharidichev zaharidichev requested a review from hawkw March 23, 2020 16:37
@zaharidichev zaharidichev merged commit adf8e64 into master Mar 23, 2020
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Mar 31, 2020
This release restores the `route_actual_response_total` metric, which is
needed for `linkerd routes -o wide`.

---

* Update test certificates (linkerd/linkerd2-proxy#460)
* Use strong_count instead of upgrade on weak Arcs in cache (linkerd/linkerd2-proxy#459)
* Wire authority override coming from discovery (linkerd/linkerd2-proxy#462)
* Update integration tests certs (linkerd/linkerd2-proxy#465)
* Add a `mock-orig-dst` feature flag (linkerd/linkerd2-proxy#466)
* http-metrics: Make latency export optional (linkerd/linkerd2-proxy#467)
* Restore the route_actual_response_total metric (linkerd/linkerd2-proxy#468)
@olix0r olix0r deleted the zd/use-strong-count branch May 25, 2021 15:48