
NetCDF: fix constraints 2 #16719

Merged
adamjstewart merged 1 commit into spack:develop from skosukhin:netcdf_discussion
Aug 22, 2020

Conversation

@skosukhin
Member

@skosukhin skosukhin commented May 19, 2020

This PR is mainly for the discussion in #16671 and is an extension of #15950.

The advantage of merging this is that we can install all valid configurations of netcdf-c:

  1. It is possible to build netcdf-c without parallel I/O even when hdf5 is built with +mpi:

    $ spack install netcdf-c~mpi ^hdf5+mpi

    but not for versions before 4.4.0 as it should be:

    $ spack install [email protected]~mpi ^hdf5+mpi
    ==> Error: An unsatisfiable variant constraint has been detected for spec:
    
    hdf5+mpi
    
    
    while trying to concretize the partial spec:
    
        [email protected]~mpi
            ^[email protected]:
  2. It is possible to build netcdf-fortran when netcdf-c is built without NetCDF4 parallel features (i.e. ~mpi) but with parallel features for NetCDF3 (i.e. +parallel-netcdf):

    $ spack install netcdf-fortran ^netcdf-c~mpi+parallel-netcdf

The main disadvantage of this is that in order to build netcdf-c without MPI dependencies at all, one has to request hdf5~mpi explicitly:

$ spack install netcdf-fortran ^netcdf-c~mpi ^hdf5~mpi

But this can be handled in packages.yaml:

  netcdf-c:
    variants: ~mpi
  hdf5:
    variants: ~mpi

With this configuration, one can install netcdf-fortran without MPI dependencies by calling:

$ spack install netcdf-fortran

and enable MPI by calling:

$ spack install netcdf-fortran ^netcdf-c+mpi
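The constraint logic this PR encodes can be illustrated with a small toy model. This is plain Python, not Spack code; the function name and version handling are illustrative assumptions, and only the three rules stated in the PR description are modeled:

```python
# Toy model of the hdf5 dependency constraints for netcdf-c described in
# this PR (not the merged diff):
#   - netcdf-c+mpi requires hdf5+mpi
#   - netcdf-c@:4.3~mpi requires hdf5~mpi
#   - netcdf-c@4.4:~mpi accepts hdf5 with or without +mpi

def hdf5_dep_ok(netcdf_version, netcdf_mpi, hdf5_mpi):
    """Return True if the (netcdf-c, hdf5) variant combination is valid."""
    major, minor = (int(x) for x in netcdf_version.split('.')[:2])
    if netcdf_mpi:
        return hdf5_mpi              # parallel NetCDF4 needs parallel HDF5
    if (major, minor) < (4, 4):
        return not hdf5_mpi          # before 4.4.0, ~mpi forces hdf5~mpi
    return True                      # 4.4.0+: ~mpi works with either hdf5

print(hdf5_dep_ok('4.7.3', False, True))   # True:  netcdf-c~mpi ^hdf5+mpi
print(hdf5_dep_ok('4.3.3', False, True))   # False: rejected before 4.4.0
```

The second call corresponds to the unsatisfiable-constraint error shown above for [email protected]~mpi ^hdf5+mpi.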

@adamjstewart
Member

Honestly, I wouldn't overthink this at the moment. Once the new concretizer is released (supposedly sometime this month??) then all of these hacks will be irrelevant and Spack should be smart enough to figure out the right way to do things.

@skosukhin
Member Author

@adamjstewart Which of the modifications do you consider a hack?

@adamjstewart
Member

I consider the previous state to be mostly hacks. I think this PR is a step in the right direction, but I can't tell what the impacts of this will be on the current concretizer since we presumably added them for a reason. Ideally, we would just add depends_on('[email protected]:+hl') and wouldn't need to specify anything else and the concretizer would simply do the right thing.

@skosukhin
Member Author

@adamjstewart I don't know how the concretizer (the current one or the new one) would know whether we need hdf5+mpi or hdf5~mpi in a particular case if we had only depends_on('[email protected]:+hl'). I guess we need two additional depends_on statements anyway: depends_on('hdf5~mpi', when='~mpi') and depends_on('hdf5+mpi', when='+mpi'). The current two statements depends_on('[email protected]:+hl~mpi', when='~mpi') and depends_on('[email protected]:+hl+mpi', when='+mpi') are semantically the same as the three above but they are treated differently by the current concretizer. We have problems with spack spec [email protected], spack spec cdo and some other packages. So yes, there is an impact on the current concretizer but I guess that it's a positive one.
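Written out side by side, the two formulations being compared look as follows; this is a sketch assuming standard Spack directive syntax, not the exact diff:

    # Current (combined) form, two directives:
    depends_on('[email protected]:+hl~mpi', when='~mpi')
    depends_on('[email protected]:+hl+mpi', when='+mpi')

    # Semantically equivalent (split) form, three directives:
    depends_on('[email protected]:+hl')
    depends_on('hdf5~mpi', when='~mpi')
    depends_on('hdf5+mpi', when='+mpi')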

You might also have questions regarding the deleted conflicts. On top of what I have written in #15950, I would like to point out that it was I who introduced them in #5819. And it was a mistake.
First, the following should have been modified after the discussion in #5790 on whether hdf5 should have +mpi by default:

# The following doesn't work if hdf5~mpi by default and netcdf-c+mpi is
# specified in packages.yaml
# depends_on('hdf5+mpi', when='+mpi')
# Thus, we have to introduce a conflict
conflicts('+mpi', when='^hdf5~mpi',
          msg='netcdf-c+mpi requires hdf5+mpi')

It should have been replaced with depends_on('hdf5+mpi', when='+mpi') from the beginning because it was decided that hdf5+mpi is what we have by default.
Second, the following is also redundant:
# The following doesn't work if hdf5+mpi by default and netcdf-c~mpi is
# specified in packages.yaml
# depends_on('hdf5~mpi', when='@:4.3~mpi')
# Thus, we have to introduce a conflict
conflicts('~mpi', when='@:4.3^hdf5+mpi',
          msg='netcdf-c@:4.3~mpi requires hdf5~mpi')

It should have been simply replaced with depends_on('hdf5~mpi', when='@:4.3~mpi') because what I meant by "doesn't work" should read as "doesn't help". I didn't like the default error message of the concretizer and decided to replace it with a "better" one from the conflict statement. What I wanted the concretizer to help me with was also rather strange: I had hdf5+mpi by default and netcdf~mpi in my packages.yaml, and I wanted the concretizer to handle spack spec [email protected], but it could handle only spack spec [email protected]~mpi, and this made me angry because I did have netcdf~mpi in packages.yaml. Probably affected by this anger, I messed things up. Moreover, I just checked out that old version of Spack, added netcdf~mpi and hdf5~mpi to packages.yaml, and ran spack spec [email protected]+mpi. And it failed: "+mpi" conflicts with "netcdf^hdf5~mpi" [netcdf+mpi requires hdf5+mpi]. This is another confirmation that putting those conflicts in was a bad idea and we need to remove them.
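Putting the two replacements together, the removed conflicts would be expressed as plain conditional dependencies; again a sketch of the intent rather than the merged diff:

    # Replaces: conflicts('+mpi', when='^hdf5~mpi', ...)
    depends_on('hdf5+mpi', when='+mpi')

    # Replaces: conflicts('~mpi', when='@:4.3^hdf5+mpi', ...)
    depends_on('hdf5~mpi', when='@:4.3~mpi')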

I am closing #15950 and opening this PR for merging.

@skosukhin skosukhin marked this pull request as ready for review May 22, 2020 17:11
@scheibelp scheibelp self-assigned this May 23, 2020
@adamjstewart
Member

Going through old PRs. I'm not sure what to do about this one. Does @WardF have any comments? If not, I'm inclined to merge this since @skosukhin is a maintainer and is willing to take all the blame if anything goes wrong and submit a follow-up PR to fix anything that breaks because of it 😛

@adamjstewart adamjstewart merged commit 9fdb945 into spack:develop Aug 22, 2020
