
Spack uses too many processes by default and runs out of memory #11072

@jjwilke

Description


Related to #327. I know there was a lot of discussion about it in the previous issue.

This is not just a performance optimization; it actually causes builds to fail.
Having too many processes active at once leads to very strange gcc 'internal compiler error' messages. I think this is gcc running out of memory, but I have not been able to confirm that.

By default, Spack spins up 160 make jobs here, which very quickly burns through the available memory (these are C++ compilations involving Trilinos). Lowering the number of make jobs makes the compiler errors go away.
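As a workaround, the parallelism can be capped. Spack honors a `build_jobs` setting in `config.yaml` (and `spack install -j N` overrides it per invocation); the value 16 below is just an illustrative choice, not a recommendation:

```yaml
# ~/.spack/config.yaml -- cap the default number of make jobs
config:
  build_jobs: 16
```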

I believe this should be fairly easy to fix because Spack already uses a compiler wrapper. The wrapper could check the available memory on the node when the compiler exits with an error. If available memory is very low, this would be a "non-fatal error" and would trigger a reduction in the number of make jobs.
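A minimal sketch of what that check might look like, assuming Linux (`/proc/meminfo`) and an arbitrary 512 MiB threshold; `classify_failure` and `LOW_MEMORY_KB` are hypothetical names, not existing Spack code:

```python
LOW_MEMORY_KB = 512 * 1024  # assumed threshold: 512 MiB


def mem_available_kb():
    """Return MemAvailable from /proc/meminfo in kB, or None if unreadable."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1])
    except OSError:
        pass
    return None


def classify_failure(exit_code):
    """Decide whether a compiler failure looks like memory exhaustion."""
    if exit_code == 0:
        return "ok"
    avail = mem_available_kb()
    if avail is not None and avail < LOW_MEMORY_KB:
        return "non-fatal-oom"  # low memory: worth retrying with fewer jobs
    return "fatal"              # a genuine compile error: propagate it
```

The wrapper would call this right after the real compiler exits, and report "non-fatal-oom" back to the build driver instead of failing outright.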

The notion of a "non-fatal error" that can be diagnosed and fixed might be useful in general, in case other situations like this arise.

* make -> error
* diagnose and fix
* make (pick up where it left off)
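The cycle above could be sketched as a retry loop that halves the job count after each non-fatal out-of-memory failure; `run_make` and `is_low_memory` are placeholders for the real build invocation and the wrapper's memory check, not Spack APIs:

```python
def build_with_backoff(run_make, is_low_memory, jobs=16):
    """Retry make with fewer jobs after a non-fatal out-of-memory failure."""
    while jobs >= 1:
        if run_make(jobs):       # make resumes where the last run stopped
            return jobs          # success: return the job count that worked
        if not is_low_memory():  # memory was fine, so the error is genuine
            raise RuntimeError("fatal build error")
        jobs //= 2               # halve the parallelism and try again
    raise RuntimeError("build failed even with a single job")
```

Because make only rebuilds out-of-date targets, each retry picks up where the previous run left off rather than starting from scratch.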
