Add ability to configure the default max_processes upper limit value without locally patching spack #29464

@mpbelhorn

Summary

It would be useful on OLCF machines to be able to configure the max number of processes used to solve an environment.

Currently, Spack chooses a maximum number of solver processes equal to the lesser of the number of specs to solve and 16. On OLCF login nodes, our admins use cgroup rules to limit the amount of memory and process walltime any individual user can consume. When Spack defaults to 16 processes, the clingo solver processes can collectively consume more memory than the cgroup limit allows. The solver processes are then forcibly killed, which hangs Spack in the concretization phase.

Rather than changing https://github.com/spack/spack/blob/develop/lib/spack/spack/environment/environment.py#L1211-L1214 to a lower hard-coded upper limit in our local forks of Spack, I propose that the maximum number of processes used when solving an environment adopt the same upper limit as spack.config.build_jobs. We already set build_jobs lower than 16 (and often lower than the available number of CPUs) to avoid exceeding cgroup memory limits. However, a separate, new configuration value would also be acceptable for our purposes.
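For illustration, the proposed selection logic might look roughly like the sketch below. The function name solver_processes and its signature are assumptions made for this sketch, not actual Spack internals; the real change would live in environment.py around the linked lines.

```python
# Hypothetical sketch of the proposed worker-count selection.
# `solver_processes` and its signature are illustrative, not Spack's API.

DEFAULT_MAX_PROCESSES = 16  # the current hard-coded upper limit


def solver_processes(num_specs, build_jobs=None):
    """Pick the number of parallel concretization processes.

    Today the cap is the literal 16; under this proposal the cap would
    instead come from the user's configured build_jobs when it is set.
    """
    cap = build_jobs if build_jobs else DEFAULT_MAX_PROCESSES
    return max(1, min(num_specs, cap))
```

With build_jobs set to 4, a 100-spec environment would then use 4 workers instead of 16, staying under the cgroup memory limit without a local patch.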

Additional information

$ spack --version  
0.17.1-1474-ef75fe153b

Possibly related to:

General information

  • I have run spack --version and reported the version of Spack
  • I have searched the issues of this repo and believe this is not a duplicate

Metadata

Labels

feature: A feature is missing in Spack
