
Make -j, --jobs flags definitive #17598

@haampie

Description

Rationale

Spack handles the -j flag differently from other popular build systems like make and ninja: it hard-caps the number of build jobs at the number of cores:

# first the -j value is saved like
jobs = min(jobs, multiprocessing.cpu_count())
spack.config.set('config:build_jobs', jobs, scope='command_line')

# later it is used as
jobs = spack.config.get('config:build_jobs', 16) if pkg.parallel else 1
jobs = min(jobs, multiprocessing.cpu_count())
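Concretely, this clamping means a ninja-style nproc + 2 can never take effect. A minimal sketch of the effect (the function name is illustrative, not Spack's actual code):

```python
import multiprocessing

def current_build_jobs(requested):
    # Mirrors the snippet above: the user's -j value is clamped
    # to the hardware core count before it is ever used.
    return min(requested, multiprocessing.cpu_count())

n = multiprocessing.cpu_count()
# Asking for n + 2 jobs (ninja's default) still yields only n.
print(current_build_jobs(n + 2))
```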

For reference, make, ninja, scons and ctest do not have an upper limit, and ninja seems to set the number of parallel jobs to nproc + 2 by default on my system (something currently not possible with spack):

$ make --help | grep "jobs"
  -j [N], --jobs[=N]          Allow N jobs at once; infinite jobs with no arg.
$ ninja --help 2>&1 | grep "jobs"
  -j N     run N jobs in parallel (0 means infinity) [default=18 on this system]

When it comes to the optimal number of build jobs, it seems to be common practice to run slightly more jobs than there are cores (as ninja does; see also https://stackoverflow.com/questions/15289250/make-j4-or-j8). I would expect to be able to enforce this in Spack by specifying the -j flag, but I can't, since it has an upper limit.

Notice that ninja also respects cpuset / taskset on Linux, which Spack does not:

$ taskset -c 0-1 ninja --help 2>&1 | grep "jobs"
  -j N     run N jobs in parallel (0 means infinity) [default=3 on this system]

It automatically sets the number of jobs to 3 when given just 2 cores (so, nproc + 1 here), which is very useful.
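For context, the affinity-restricted CPU count that ninja reacts to is also visible from Python on Linux via os.sched_getaffinity; a sketch (falls back to the hardware count on platforms without it):

```python
import multiprocessing
import os

# multiprocessing.cpu_count() reports all hardware CPUs and
# ignores any cpuset/taskset restriction on the process.
hw_cpus = multiprocessing.cpu_count()

# os.sched_getaffinity(0) (Linux-only) returns the set of CPUs
# the current process is actually allowed to run on.
try:
    usable_cpus = len(os.sched_getaffinity(0))
except AttributeError:  # e.g. macOS, Windows
    usable_cpus = hw_cpus

print(hw_cpus, usable_cpus)
```

Under `taskset -c 0-1`, usable_cpus would be 2 while hw_cpus still reports the full machine.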

Description

It would be nice if the -j flag were handled differently, along these lines:

  • If -j is specified by the user, simply take this value as the number of build jobs; do not cap it at the number of CPU cores.
  • If -j is not specified, take a sensible default:
    • Take min(config:build_jobs, cpus available)
    • Furthermore, on Linux, ensure that cpus available corresponds with the number of cores made available to the process through cgroups / cpuset, i.e. sched_setaffinity, such that it gets a proper default in slurm, docker, kubernetes, or people who simply use taskset (Use process cpu affinity instead of hardware specs to get cpu count #17566).
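The two bullets above can be sketched as follows (build_jobs and cpus_available are hypothetical helper names for illustration, not existing Spack functions):

```python
import multiprocessing
import os

def cpus_available():
    """CPUs this process may actually use: respects cgroups/cpusets
    via sched_getaffinity on Linux; hardware count elsewhere."""
    try:
        return len(os.sched_getaffinity(0))  # Linux only
    except AttributeError:
        return multiprocessing.cpu_count()

def build_jobs(cli_jobs=None, config_build_jobs=16, parallel=True):
    """Hypothetical helper sketching the proposed behavior."""
    if not parallel:
        return 1
    if cli_jobs is not None:
        # An explicit -j is taken verbatim: no upper limit.
        return cli_jobs
    # Default: the config value, capped by the CPUs this
    # process is actually allowed to use.
    return min(config_build_jobs, cpus_available())
```

With this, `spack install -j 18` on a 16-core machine would run 18 jobs, while the no-flag default inside `taskset -c 0-1` would drop to 2.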

Additional information

Came up in #17566.

General information

  • I have run spack --version and reported the version of Spack
  • I have searched the issues of this repo and believe this is not a duplicate
