
Allow using -j to control the parallelism of concretization #37608

Merged

tgamblin merged 1 commit into spack:develop from
alalazo:bugfix/control_parallelism_of_concretization_from_cli
May 11, 2023

Conversation

alalazo (Member) commented May 11, 2023

fixes #29464

This PR allows using

```
$ spack concretize -j X
```

to set a cap on the parallelism of concretization from the command line.

@spackbot-app spackbot-app bot added commands core PR affects Spack core functionality environments labels May 11, 2023
@alalazo alalazo added this to the v0.20.0 milestone May 11, 2023
@alalazo alalazo requested a review from haampie May 11, 2023 12:24
alecbcs (Member) left a comment:


Awesome! Great to see this getting added!

```python
# before: hard-coded cap
max_processes = min(len(arguments), 16)  # Number of specs, capped at 16

# after: cap comes from configuration (-j overrides config:build_jobs)
max_processes = min(
    len(arguments),                         # Number of specs
    spack.config.get("config:build_jobs"),  # Cap on build jobs
)
```
Member:

Are you sure we should still be taking the min of the two arguments here? If a user declares -j 16 but only specifies a single spec they might be surprised to see clingo only using a single thread no? Or is this not controlling threads to clingo?

alalazo (Member, Author):

clingo is currently single threaded. We checked in the early days and using parallelism at that level was detrimental to performance. Parallelism is obtained by spawning multiple processes, each of which is concretizing an independent spec.
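The process-per-spec approach described here can be sketched with a plain `multiprocessing.Pool`. The names `concretize_one` and `concretize_all` below are illustrative placeholders, not Spack's actual API:

```python
import multiprocessing


def concretize_one(spec_name):
    # Stand-in for the CPU-bound, single-threaded clingo solve of one spec.
    return f"{spec_name} (concretized)"


def concretize_all(spec_names, jobs):
    # Parallelism comes from concretizing independent specs in separate
    # processes, not from multi-threading inside a single solve.
    max_processes = max(1, min(len(spec_names), jobs))
    with multiprocessing.Pool(processes=max_processes) as pool:
        return pool.map(concretize_one, spec_names)
```

With `jobs=2`, `concretize_all(["zlib", "hdf5", "openmpi"], jobs=2)` solves at most two specs concurrently, which is the cap this PR exposes via `-j`.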

alalazo (Member, Author):

Reference for using a single thread:

```python
control.configuration.solve.parallel_mode = "1"
```

Member:

Corollary to this is if we switch to when_possible in our CI environments, we're back to basically one thread. IMO this is fine, as it'll still (hopefully) be faster b/c it's a single solve. But the win with parallelism here is b/c we're concretizing so many things separately.

Member:

> clingo is currently single threaded. We checked in the early days and using parallelism at that level was detrimental to performance. Parallelism is obtained by spawning multiple processes, each of which is concretizing an independent spec.

Gotcha! Thanks for the explanation @alalazo! I was definitely confused on the meaning of processes here.

@tgamblin tgamblin merged commit 5c7dda7 into spack:develop May 11, 2023
haampie (Member) commented May 11, 2023

A bit too late, but this should use `determine_number_of_jobs(parallel=True)` so that it's also capped by cgroups / nproc. I can make a separate PR.
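Capping by the CPUs actually visible to the process (which CPU pinning and some cgroup setups restrict, unlike the raw core count) can be sketched as follows. This is only an illustration of the idea, not Spack's actual `determine_number_of_jobs` implementation:

```python
import os


def available_cpus():
    # sched_getaffinity(0) reflects the CPUs this process may run on
    # (e.g. under taskset or cpuset cgroups); os.cpu_count() does not.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # Not available on macOS/Windows; fall back to the raw count.
        return os.cpu_count() or 1


def determine_jobs(requested, parallel=True):
    # Never exceed what the scheduler will actually give us.
    if not parallel:
        return 1
    return max(1, min(requested, available_cpus()))
```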


Labels

commands, core (PR affects Spack core functionality), environments

Projects

None yet

Development

Successfully merging this pull request may close these issues:

Add ability to configure the default max_processes upper limit value without locally patching spack

4 participants