
Change interners to start preallocated with an increased capacity #137354

Merged
merged 1 commit into rust-lang:master on Feb 26, 2025

Conversation

FractalFir
Contributor

@FractalFir FractalFir commented Feb 21, 2025

Inspired by #137005.

Added a with_capacity function to InternedSet. Changed the CtxtInterners to start with InternedSets preallocated with a capacity.

This does increase memory usage very slightly (by ~1 MB at the start), although that increase quickly disappears for larger crates (since they require such capacity anyway).

A local perf run indicates this improves compile times for small crates (like ripgrep), without a negative effect on larger ones.
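
For illustration, here is a minimal sketch of the idea. The `InternedSet` stand-in, field list, and capacities below are simplified placeholders, not the actual rustc definitions (the real interners are sharded hash sets inside `CtxtInterners`):

```rust
use std::collections::HashSet;
use std::hash::Hash;

// Simplified stand-in for rustc's interner set (the real one wraps a
// sharded hash table); the point is just the `with_capacity` constructor.
struct InternedSet<T> {
    set: HashSet<T>,
}

impl<T: Hash + Eq> InternedSet<T> {
    // Preallocate the backing table so early interning avoids growth and rehashing.
    fn with_capacity(capacity: usize) -> Self {
        InternedSet { set: HashSet::with_capacity(capacity) }
    }
}

// Hypothetical context holding a couple of interners, each preallocated with
// its own starting capacity (the PR tunes a per-interner value for each field).
struct CtxtInterners {
    type_: InternedSet<String>,
    region: InternedSet<String>,
}

impl CtxtInterners {
    fn new() -> Self {
        CtxtInterners {
            type_: InternedSet::with_capacity(16 * 1024),
            region: InternedSet::with_capacity(1024),
        }
    }
}

fn main() {
    let interners = CtxtInterners::new();
    // The backing table starts out at least as large as requested.
    assert!(interners.type_.set.capacity() >= 16 * 1024);
}
```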

@rustbot
Collaborator

rustbot commented Feb 21, 2025

r? @oli-obk

rustbot has assigned @oli-obk.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels Feb 21, 2025
@rust-log-analyzer

This comment has been minimized.

@Kobzol
Contributor

Kobzol commented Feb 21, 2025

@bors try @rust-timer queue

@rust-timer

This comment has been minimized.

@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Feb 21, 2025
@bors
Collaborator

bors commented Feb 21, 2025

⌛ Trying commit e914e37 with merge f5637ed...

bors added a commit to rust-lang-ci/rust that referenced this pull request Feb 21, 2025
[perf experiment] Changed interners to start preallocated with an increased capacity

Inspired by rust-lang#137005.

*Not meant to be merged in its current form*

Added a `with_capacity` function to `InternedSet`. Changed the `CtxtInterners` to start with `InternedSets` preallocated with a capacity.

This *does* increase memory usage very slightly (by 1 MB at the start), although that increase quickly disappears for larger crates (since they require such capacity anyway).

A local perf run indicates this improves compile times for small crates (like `ripgrep`), without a negative effect on larger ones:
![image](https://github.com/user-attachments/assets/4a7f3317-7e61-4b28-a651-cc79ee990689)

The current default capacities are chosen somewhat arbitrarily, and are relatively low.

Depending on what kind of memory usage is acceptable, it may be beneficial to increase that capacity for some interners.

From a second local perf run (with the capacity of `_type` increased to `131072`), it looks like increasing the size of the preallocated type interner has the biggest impact:
![image](https://github.com/user-attachments/assets/08ac324a-b03c-4fe9-b779-4dd35e7970d9)

What would be the maximum acceptable memory usage increase? I think most people would not mind sacrificing 1-2 MB for an improvement in compile speed, but I am curious what the general opinion is here.
@bors
Collaborator

bors commented Feb 21, 2025

☀️ Try build successful - checks-actions
Build commit: f5637ed (f5637ed995f1c759d9c98b6ee770c89e5c9174ed)

@rust-timer

This comment has been minimized.

@rust-timer
Collaborator

Finished benchmarking commit (f5637ed): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 0.8%  | [0.8%, 0.8%]   | 1     |
| Regressions ❌ (secondary)  | 0.3%  | [0.2%, 0.4%]   | 11    |
| Improvements ✅ (primary)   | -0.3% | [-0.4%, -0.2%] | 17    |
| Improvements ✅ (secondary) | -0.4% | [-0.8%, -0.0%] | 46    |
| All ❌✅ (primary)           | -0.2% | [-0.4%, 0.8%]  | 18    |

Max RSS (memory usage)

Results (primary -2.5%, secondary -0.5%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 1.4%  | [1.4%, 1.4%]   | 1     |
| Regressions ❌ (secondary)  | 4.8%  | [2.4%, 7.7%]   | 3     |
| Improvements ✅ (primary)   | -6.5% | [-6.5%, -6.5%] | 1     |
| Improvements ✅ (secondary) | -5.9% | [-7.7%, -4.3%] | 3     |
| All ❌✅ (primary)           | -2.5% | [-6.5%, 1.4%]  | 2     |

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 773.525s -> 772.968s (-0.07%)
Artifact size: 361.00 MiB -> 361.01 MiB (0.00%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Feb 21, 2025
@oli-obk
Contributor

oli-obk commented Feb 21, 2025

Ideally the buffer sizes would be based on some collected data, but it also seems OK to merge as is and add a FIXME for the future.

@FractalFir
Contributor Author

Ideally the buffer sizes would be based on some collected data, but it also seems OK to merge as is and add a FIXME for the future.

I have done a few more local perf runs, and it seems like the bigger the initial capacity, the better - at least for `_type` (I haven't tested the others).

The gains start to diminish somewhere past 2^16, but I still see some improvements even as far as 2^20.

This is both good (since I see more improvements) and bad - since there is no clear local maximum, it is hard to decide at which point the higher baseline RAM usage stops being worth it.

IMHO, the best way to tune this would be to see the size of those interners for some commonly built crates (like `syn`) and ensure there is enough capacity to build them without a reallocation.

Does Rust have some minimum required specs?

I don't know how much headroom I have here.

@Kobzol
Contributor

Kobzol commented Feb 21, 2025

There are not really any minimum required specs, but if you take a look at compiling hello world, the max RSS shouldn't increase dramatically after the preallocation, IMO.

IMHO, the best way to tune this would be to see the size of those interners for some commonly built crates (like `syn`) and ensure there is enough capacity to build them without a reallocation.

That should be relatively simple to figure out: you could print the sizes of the maps before the compilation ends, and then use the eprintln profiler of rustc-perf to gather the sizes across the rustc-perf benchmarks.
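
As a rough illustration of that approach (this is not actual rustc code; the helper, the map type, and where it would be called from are assumptions), one might log each interner's entry count to stderr just before the session ends and let the eprintln profiler collect the lines per benchmark:

```rust
use std::collections::HashSet;

// Hypothetical helper: report how many entries a map-like interner holds,
// printed to stderr so rustc-perf's eprintln profiler can pick it up.
fn report_interner_size<T>(name: &str, set: &HashSet<T>) {
    eprintln!("{name} interner entries: {}", set.len());
}

fn main() {
    // Stand-in data; in the real experiment this would be the interners
    // inside `CtxtInterners`, dumped right before compilation finishes.
    let types: HashSet<&str> = ["u32", "i64", "bool"].into_iter().collect();
    report_interner_size("type_", &types);
}
```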

CC @nnethercote

@FractalFir
Contributor Author

I have made some bigger changes to the capacities based on data from the cargo benchmark.

For each variable I tweaked, I also included a comment with some observations about the values I encountered during my tests. There are some things I have not tweaked yet (those don't have comments).

With those much larger capacities, the max RSS has increased in almost all cases, by an average of 13% across all the benchmarks (with the biggest increase being 23%).

Instruction count got reduced by an average of 0.5% across all the relevant results (0.2% if non-relevant results are included).

Are the RSS increases reasonable for those compile-time gains? If not, what would be the biggest acceptable RSS increase?

@Kobzol
Contributor

Kobzol commented Feb 21, 2025

We didn't accept mimalloc even though it was ~5% faster across the board, because it regressed RSS by 15-25%, so that's a lot :) Let's see it on perf.rlo.

@bors try @rust-timer queue

While the perf. wins are kinda nice, I'm not completely sure if they are worth having a ton of magic constants in the code, especially since their optimality will necessarily change as the codebase evolves.

@rust-timer

This comment has been minimized.

@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Feb 21, 2025
bors added a commit to rust-lang-ci/rust that referenced this pull request Feb 21, 2025
[perf experiment] Changed interners to start preallocated with an increased capacity

@bors
Collaborator

bors commented Feb 21, 2025

⌛ Trying commit 441f061 with merge c5e7aa9...

@FractalFir
Contributor Author

We didn't accept mimalloc even though it was ~5% faster across the board, because it regressed RSS by 15-25%, so that's a lot :) Let's see it on perf.rlo.

Makes sense - with different "magic numbers", I managed to see some perf improvements without an RSS regression (on average). I'll have to see what the optimal values are, and what can be done without increasing the average RSS.

While the perf. wins are kinda nice, I'm not completely sure if they are worth having a ton of magic constants in the code, especially since their optimality will necessarily change as the codebase evolves.

That makes sense. Would having one constant and then multiplying / dividing it be better?

E.g.:

const INTERN_CAP: usize = 1024;
CtxtInterners {
    arena,
    type_: InternedSet::with_capacity(INTERN_CAP * 16),
    const_lists: InternedSet::with_capacity(INTERN_CAP / 2),
    args: InternedSet::with_capacity(INTERN_CAP * 16),
    type_lists: InternedSet::with_capacity(INTERN_CAP * 8),
    region: InternedSet::with_capacity(INTERN_CAP),
    // And so on...
}

That could make future adjustments clearer.

Alternatively, this could be a command-line / environment option (to allow people with more RAM to get the perf benefits), or I could just use one constant capacity (e.g. 1024) for all interners - although that would be a bit wasteful.

@bors
Collaborator

bors commented Feb 21, 2025

☀️ Try build successful - checks-actions
Build commit: c5e7aa9 (c5e7aa92e20672ed8c1de572b1279d5ec9491e04)

@rust-timer

This comment has been minimized.

@rust-timer
Collaborator

Finished benchmarking commit (c5e7aa9): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 1.7%  | [1.0%, 2.2%]   | 5     |
| Regressions ❌ (secondary)  | 0.9%  | [0.2%, 1.6%]   | 12    |
| Improvements ✅ (primary)   | -0.4% | [-0.8%, -0.1%] | 75    |
| Improvements ✅ (secondary) | -0.5% | [-1.6%, -0.1%] | 71    |
| All ❌✅ (primary)           | -0.2% | [-0.8%, 2.2%]  | 80    |

Max RSS (memory usage)

Results (primary 15.9%, secondary 18.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range         | count |
|----------------------------|-------|---------------|-------|
| Regressions ❌ (primary)    | 15.9% | [3.8%, 30.9%] | 268   |
| Regressions ❌ (secondary)  | 18.1% | [3.2%, 48.5%] | 207   |
| Improvements ✅ (primary)   | -     | -             | 0     |
| Improvements ✅ (secondary) | -     | -             | 0     |
| All ❌✅ (primary)           | 15.9% | [3.8%, 30.9%] | 268   |

Cycles

Results (primary 3.2%, secondary 5.2%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean | range         | count |
|----------------------------|------|---------------|-------|
| Regressions ❌ (primary)    | 3.2% | [0.7%, 12.3%] | 163   |
| Regressions ❌ (secondary)  | 5.2% | [1.1%, 17.4%] | 152   |
| Improvements ✅ (primary)   | -    | -             | 0     |
| Improvements ✅ (secondary) | -    | -             | 0     |
| All ❌✅ (primary)           | 3.2% | [0.7%, 12.3%] | 163   |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 773.873s -> 786.312s (1.61%)
Artifact size: 361.04 MiB -> 363.05 MiB (0.56%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Feb 21, 2025
@Kobzol
Contributor

Kobzol commented Feb 21, 2025

Yeah, that went way too much in the wrong direction :)

@FractalFir FractalFir force-pushed the intern_with_cap branch 2 times, most recently from a0b66d3 to 84178df on February 24, 2025 at 23:45
@FractalFir
Contributor Author

@nnethercote would you be down to reviewing PRs like this in the future?

I have some more changes like this (based on some local profiling), and this seems to be something you might want to review.

@@ -2847,6 +2853,7 @@ impl<'tcx> TyCtxt<'tcx> {
// FIXME consider asking the input slice to be sorted to avoid
// re-interning permutations, in which case that would be asserted
// here.

Contributor


unintentional?

offset_of: Default::default(),
valtree: Default::default(),
// The factors have been chosen by @FractalFir based on observed interner sizes
// (obtained by printing them using `x perf eprintln --includes cargo`),
Contributor


What was printed, i.e. where was the eprintln! call inserted?

Also worth mentioning that cargo is one of the larger benchmarks.

@nnethercote
Contributor

@nnethercote would you be down to reviewing PRs like this in the future?

Sure, I've done tons of these kinds of micro-optimizations in the past :)

@FractalFir
Contributor Author

Should be good to go now.

@FractalFir
Contributor Author

@rustbot label -S-waiting-on-author +S-waiting-on-review

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. and removed S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Feb 25, 2025
@nnethercote nnethercote changed the title [perf experiment] Changed interners to start preallocated with an increased capacity Change interners to start preallocated with an increased capacity Feb 25, 2025
@nnethercote
Contributor

Looks good. The very first comment in the PR gets used as the merge comment. Can you update it above to not say "Not meant to be merged in its current form", and maybe remove the images? r=me with that, thanks.

@bors delegate=FractalFir

@bors
Collaborator

bors commented Feb 25, 2025

✌️ @FractalFir, you can now approve this pull request!

If @nnethercote told you to "r=me" after making some further change, please make that change, then do @bors r=@nnethercote

@FractalFir
Contributor Author

@bors r+

@bors
Collaborator

bors commented Feb 25, 2025

📌 Commit 7d2cfca has been approved by FractalFir

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Feb 25, 2025
@bors
Collaborator

bors commented Feb 26, 2025

⌛ Testing commit 7d2cfca with merge ac91805...

@fmease
Member

fmease commented Feb 26, 2025

[@]bors r+

@FractalFir The reviewer (r) should've been nnethercote, not yourself (via r=nnethercote). Just so you know

@FractalFir
Contributor Author

[@]bors r+

@FractalFir The reviewer (r) should've been nnethercote, not yourself (via r=nnethercote). Just so you know

Sorry, my bad. I was in a bit of a rush, and did not notice I used the wrong command. Is there something I need to do now?

@fmease
Member

fmease commented Feb 26, 2025

No worries, such things happen from time to time. I don't think we can retroactively update it without starting over. It's okay tho

@bors
Collaborator

bors commented Feb 26, 2025

☀️ Test successful - checks-actions
Approved by: FractalFir
Pushing ac91805 to master...

@bors bors added the merged-by-bors This PR was explicitly merged by bors. label Feb 26, 2025
@bors bors merged commit ac91805 into rust-lang:master Feb 26, 2025
7 checks passed
@rustbot rustbot added this to the 1.87.0 milestone Feb 26, 2025
@rust-timer
Collaborator

Finished benchmarking commit (ac91805): comparison URL.

Overall result: ✅ improvements - no action needed

@rustbot label: -perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 0.3%  | [0.3%, 0.4%]   | 2     |
| Regressions ❌ (secondary)  | 0.2%  | [0.2%, 0.2%]   | 2     |
| Improvements ✅ (primary)   | -0.3% | [-0.6%, -0.2%] | 39    |
| Improvements ✅ (secondary) | -0.4% | [-1.1%, -0.1%] | 63    |
| All ❌✅ (primary)           | -0.3% | [-0.6%, 0.4%]  | 41    |

Max RSS (memory usage)

Results (primary 0.6%, secondary -1.2%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 4.2%  | [3.3%, 4.8%]   | 4     |
| Regressions ❌ (secondary)  | 3.6%  | [2.2%, 7.8%]   | 7     |
| Improvements ✅ (primary)   | -4.1% | [-4.5%, -3.7%] | 3     |
| Improvements ✅ (secondary) | -4.0% | [-7.2%, -2.4%] | 12    |
| All ❌✅ (primary)           | 0.6%  | [-4.5%, 4.8%]  | 7     |

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 771.836s -> 771.284s (-0.07%)
Artifact size: 361.95 MiB -> 361.94 MiB (-0.00%)

@rustbot rustbot removed the perf-regression Performance regression. label Feb 26, 2025
Labels
merged-by-bors This PR was explicitly merged by bors. S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.
10 participants