add compilers to mpi setup_run_environment methods forall mpi#17015
Conversation
Force-pushed from 0e3f9af to bde7e66
@becker33 Is this ready to merge?

@bvanessen someone needs to review it before I can merge it.
scheibelp left a comment

LGTM, although I have a couple of suggestions.
    env.prepend_path('LD_LIBRARY_PATH', self.prefix.lib)

    def setup_run_environment(self, env):
        # Because MPI functions as a compiler we need to setup the compilers
It looks like you could update setup_dependent_build_environment to use this function, as you have done with other packages. Not a big deal, but FYI.
You can't in this case: in setup_run_environment you're setting the MPI compilers based on the compiler used for the MPI spec, while in setup_dependent_build_environment you're setting them based on the dependent spec (in case the dependent uses a different compiler from your MPI).
I could factor out a separate function that takes the spec as an input and call that from both places, but it didn't seem warranted.
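As a hedged sketch of the factoring the author mentions, one spec-driven helper could serve both hooks. The helper name setup_mpi_compilers, the EnvMods stand-in, and the prefix path are all illustrative assumptions, not Spack's actual API:

```python
class EnvMods:
    """Minimal stand-in for Spack's environment-modifications object
    (illustration only)."""
    def __init__(self):
        self.vars = {}

    def set(self, name, value):
        self.vars[name] = value


def setup_mpi_compilers(env, prefix):
    # Point the conventional MPI wrapper variables at this install's wrappers.
    for var, wrapper in (('MPICC', 'mpicc'), ('MPICXX', 'mpicxx'),
                         ('MPIF77', 'mpif77'), ('MPIF90', 'mpif90')):
        env.set(var, '{0}/bin/{1}'.format(prefix, wrapper))


class MpiPackageSketch:
    """Hypothetical MPI package showing both hooks delegating to one helper."""
    prefix = '/opt/spack/mvapich2'

    def setup_run_environment(self, env):
        # At runtime, use the wrappers built with this MPI's own compiler.
        setup_mpi_compilers(env, self.prefix)

    def setup_dependent_build_environment(self, env, dependent_spec):
        # A real package could consult dependent_spec's compiler here before
        # delegating to the shared helper.
        setup_mpi_compilers(env, self.prefix)
```

The shared helper keeps the variable names in one place while still letting each hook decide which spec's compiler to pass in.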
    # Because MPI functions as a compiler, we need to treat it as one and
    # add its compiler paths to the run environment.
    self.setup_compiler_environment(env)
I generally liked the idea of defining setup_dependent_build_environment in terms of setup_run_environment, because I assumed any environment state you'd want to apply at runtime you'd also want to apply when building a dependent. Do you happen to know whether the distinct logic here should also run in setup_dependent_build_environment? I'm referring specifically to:
if 'process_managers=slurm' in self.spec:
env.set('SLURM_MPI_TYPE', 'pmi2')
I doubt it would cause a problem, but in an entirely theoretical sense, if you were building in a slurm allocation you might not want to change slurm variables to point to a different build of slurm (the one mvapich depends on) instead of the one you're using. But that seems really unlikely
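The distinction above can be sketched as follows; SpecStub is an illustrative stand-in for a Spack spec's "variant in spec" membership check, not real Spack code:

```python
class SpecStub:
    """Stand-in spec supporting the `'variant=value' in spec` idiom."""
    def __init__(self, variants):
        self.variants = set(variants)

    def __contains__(self, item):
        return item in self.variants


def run_environment_vars(spec):
    """Variables applied only when running with this MPI, not when a
    dependent builds against it, so a build inside an existing slurm
    allocation keeps its own SLURM variables."""
    env = {}
    if 'process_managers=slurm' in spec:
        # Ask slurm's launcher for the PMI2 interface this build expects.
        env['SLURM_MPI_TYPE'] = 'pmi2'
    return env
```

Keeping this out of setup_dependent_build_environment avoids the theoretical conflict described above, at the cost of the dependent build not seeing the setting at all.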
Make sure each MPI package updates its compiler variables in both the dependent build environment AND the run environment, because MPI functions as a compiler.