
Remove hard-coded standard C++ library selection and add more releases in llvm package#19933

Merged
adamjstewart merged 2 commits into spack:develop from ye-luo:llvm-clang_cxx_stdlib
Jan 6, 2021

Conversation

@ye-luo
Contributor

@ye-luo ye-luo commented Nov 16, 2020

I'd like to add a new variant that allows clang to use libstdc++ as its default C++ standard library even when libcxx is built.

@ye-luo
Contributor Author

ye-luo commented Nov 16, 2020

@naromero77 What is the formatting rule/tool to fix the formatting CI failure? (Edit: fixed.)

@naromero77
Contributor

@trws Can you take a look at this one?

@trws
Contributor

trws commented Nov 17, 2020

This is something I could see being useful, but I'd prefer to have the invariants expressed in it. The main one is that you can't have the stdlib option set to libc++ if the variant to build libc++ isn't specified; as of now it would try to build it and probably fail in CMake or, worse, pick up a copy from some other build.

To your point @naromero77, I agree we're reaching a pretty heavy saturation of variants. It might be worth taking a pass over them and seeing if they're all still relevant. This one I'm in favor of, if only because the behavior it offers is something we actually do in our deployed clang builds.

@ye-luo
Contributor Author

ye-luo commented Nov 17, 2020

> The main one is that you can't have the stdlib option set to libc++ if the variant to build libc++ isn't specified; as of now it would try to build it and probably fail in CMake or, worse, pick up a copy from some other build.

I think the logic I put up is exactly what you expected: if +libcxx is not requested, clang_cxx_stdlib is completely ignored, as before. Maybe I should make this clearer in the documentation.

@ye-luo ye-luo force-pushed the llvm-clang_cxx_stdlib branch from 6671a99 to de5e1ad on November 17, 2020 02:32
@ye-luo
Contributor Author

ye-luo commented Nov 17, 2020

> This one I'm in favor of, if only because the behavior it offers is something we actually do in our deployed clang builds.

On almost all the desktop and HPC systems I have used in the past, libstdc++ is the default, since it is the default configuration choice when clang is built.

@trws
Contributor

trws commented Nov 17, 2020

> I think the logic I put up is exactly what you expected: if +libcxx is not requested, clang_cxx_stdlib is completely ignored, as before.

This is not quite right. As of now, it's an error, not ignored. I consider it an error to build a package with a variant saying it uses libcxx when libcxx is unavailable.

@ye-luo
Contributor Author

ye-luo commented Nov 17, 2020

> I think the logic I put up is exactly what you expected: if +libcxx is not requested, clang_cxx_stdlib is completely ignored, as before.

> This is not quite right. As of now, it's an error, not ignored.

Probably I don't understand Spack well enough, and I'm getting more and more confused... Did you mean that ~libcxx clang_cxx_stdlib=not_default should result in an error instead of clang_cxx_stdlib being ignored?

> I consider it an error to build a package with a variant saying it uses libcxx when libcxx is unavailable.

Could you be more specific? I don't quite understand what you mean here.

@ye-luo
Contributor Author

ye-luo commented Nov 17, 2020

> as of now it would try to build it and probably fail in CMake or, worse, pick up a copy from some other build.

I don't see why it would try to build it.

@trws
Contributor

trws commented Nov 17, 2020 via email

@trws
Contributor

trws commented Nov 17, 2020 via email

@ye-luo
Contributor Author

ye-luo commented Nov 17, 2020

The current behavior is:

```
+libcxx
+libcxx clang_cxx_stdlib=default
+libcxx clang_cxx_stdlib=libc++
# builds libcxx and sets the default C++ standard library to libc++

+libcxx clang_cxx_stdlib=libstdc++
# builds libcxx and sets the default C++ standard library to libstdc++

~libcxx
~libcxx clang_cxx_stdlib=whatever_value
# libcxx is not built and clang's default C++ standard library is set to libstdc++
```

I don't use a Mac, so I cannot say much.
My understanding is that the llvm package doesn't support using a GCC toolchain from the OS, and it is very rare for C++ support to be disabled when GCC is built, so I assume libstdc++ is always available.
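The behavior table above can be captured in a tiny standalone sketch (plain Python, not the actual Spack package code); the function name is hypothetical and exists only to make the described mapping explicit:

```python
# Standalone sketch of the behavior described above: which C++ standard
# library clang++ would default to for each combination of the +libcxx
# variant and the clang_cxx_stdlib value.
def clang_default_stdlib(libcxx: bool, clang_cxx_stdlib: str = "default") -> str:
    """Return the C++ standard library clang++ would default to."""
    if not libcxx:
        # ~libcxx: the clang_cxx_stdlib setting is ignored entirely
        return "libstdc++"
    if clang_cxx_stdlib == "default":
        # +libcxx with no explicit choice defaults to the freshly built libc++
        return "libc++"
    return clang_cxx_stdlib
```

Under this reading, only the `+libcxx clang_cxx_stdlib=libstdc++` combination overrides the default.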

```
@@ -418,8 +427,12 @@ def cmake_args(self):
        if "+libcxx" in spec:
```
Contributor Author


@trws are you saying this protection is not enough for clang_cxx_stdlib?

```python
description="C++ standard library used by clang++. "
"Default is libc++ when +libcxx or libstdc++ otherwise. "
"The selection is effective only if +libcxx is specified",
values=("default", "libstdc++", "libc++"),
```
Contributor Author

@ye-luo ye-luo Nov 17, 2020


Should I assume that Spack checks input values against the reference values in the first place?

@naromero77
Contributor

I just want to double-check that I understand the allowable configurations:

  • Use libstdc++ (nothing to build).
    • Is a conflict needed for Darwin? I'm not familiar with Mac either.
  • Build libc++ and set libstdc++ as the default.
  • Build libc++ and set libc++ as the default.

If these are the only allowable options, perhaps we should create a single multi-valued variant? It seems like having two variants leads to an over-complete set of options.

@trws
Contributor

trws commented Nov 17, 2020 via email

@naromero77
Contributor

OK, if there are two CMake options, then I would be OK with the two variants.

A few more questions:

  • Does the clang_cxx_stdlib multi-valued variant really need a default option? There seem to be only two options, libc++ or libstdc++, plus some hard conflicts with the +libcxx variant. And if there are only two options, then it could just be a true/false-style variant.
  • The naming here is a bit confusing: +libcxx is the subproject that enables the creation of libc++, correct?

@trws
Contributor

trws commented Nov 18, 2020

It could be true/false in principle, but I'm not sure there are necessarily only two values (there probably are for now, but clang also supports Microsoft's STL, which may come up at some point, or some other library). Personally, I'm not sure I'd have a "default" value as such; I'd rather set the default to one of the libraries based on other variant selections, and then just have libstdc++ and libc++ as the variant values.

Yes, libcxx is the subproject that builds libcxx. If it is built, the new clang defaults (in some cases, anyway) to using that newly built libcxx as its C++ standard library rather than the one it was built with.

@naromero77
Contributor

naromero77 commented Nov 20, 2020

So there would only be two values for the multi-valued variant. I think the changes look something like this:

```python
variant(
    "clang_cxx_stdlib",
    default="libc++",
    description="Default C++ standard library used by clang++.",
    values=("libstdc++", "libc++"),
)
```

and we default to libc++ since we also default to libcxx=True.

Plus we add a conflict:

```python
conflicts("clang_cxx_stdlib=libc++", when="~libcxx")
```
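On the earlier question of whether values are checked at all: a `values=(...)` tuple restricts the variant to a declared set. A standalone sketch of that kind of check (not Spack's actual validation code, just the idea; the names here are hypothetical):

```python
# Sketch of the value check implied by a values=("libstdc++", "libc++")
# declaration: anything outside the declared set is rejected up front.
ALLOWED_STDLIBS = ("libstdc++", "libc++")

def validate_stdlib(value: str) -> str:
    """Reject any clang_cxx_stdlib value outside the declared set."""
    if value not in ALLOWED_STDLIBS:
        raise ValueError(f"invalid clang_cxx_stdlib value: {value!r}")
    return value
```

This is what makes a finite multi-valued variant safer than a free-form string.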

@ye-luo
Copy link
Copy Markdown
Contributor Author

ye-luo commented Nov 20, 2020

When +libcxx, clang_cxx_stdlib needs to default to libc++; when ~libcxx, it needs to default to libstdc++.
I need a conditional default value, and I'm not aware of Spack having such a feature; that is why I added the "default" option.
Having a conflict is not user friendly.

@naromero77
Contributor

naromero77 commented Nov 20, 2020

@adamjstewart @alalazo I still don't think it's possible to have a conditional default value?

@ye-luo Yes, a conditional default value is better than a conflict.

But this is also undesirable behaviour IMO; variants are normally uniquely defined.

```
+libcxx clang_cxx_stdlib=default
+libcxx clang_cxx_stdlib=libc++
# builds libcxx and sets the default C++ standard library to libc++

+libcxx clang_cxx_stdlib=libstdc++
# builds libcxx and sets the default C++ standard library to libstdc++

~libcxx
~libcxx clang_cxx_stdlib=whatever_value
# libcxx is not built and clang's default C++ standard library is set to libstdc++
```

You end up having to do:

```
spack install llvm~libcxx clang_cxx_stdlib=libstdc++
```

Not user friendly, but a limitation that we have to live with in Spack for now.

@naromero77
Contributor

naromero77 commented Nov 20, 2020

Sometimes, to make it a bit more user friendly, I will do:

```python
conflicts(
    "clang_cxx_stdlib=libc++",
    when="~libcxx",
    msg="+libcxx is needed to use libc++ as the C++ standard library or set clang_cxx_stdlib=libstdc++",
)
```

@adamjstewart
Member

No conditional default variants, aside from OS-specific things.

@trws
Contributor

trws commented Dec 7, 2020

So, where are we stuck here? I'm leaning toward basing the default on the OS: Mac gets libc++ and everything else gets libstdc++ by default. Then include the conflict so a user can tell when they've asked for something that will not work. @adamjstewart will that work with what Spack can provide right now?
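The merged commit message later in this thread ("Restore OS based Clang default choice of C++ standard library") suggests this OS-based route is what landed. A minimal standalone sketch of the idea (the function name is hypothetical, not Spack code):

```python
import sys

# OS-based default as proposed above: macOS ("darwin") defaults to libc++,
# everything else to libstdc++.
def default_cxx_stdlib(platform: str = sys.platform) -> str:
    return "libc++" if platform.startswith("darwin") else "libstdc++"
```

A user who explicitly asks for the non-default library still hits the conflict check when the combination cannot work.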

@adamjstewart
Member

If the only options are libc++ (macOS) and libstdc++ (everything else) then why do we need a variant at all?

@ye-luo
Contributor Author

ye-luo commented Jan 6, 2021

@trws Do you have merge rights, or do we need an admin to do the merge?

@trws
Contributor

trws commented Jan 6, 2021

I do not have merge rights, but I agree this works, thanks! @adamjstewart could we get a merge here?

@trws
Contributor

trws commented Jan 6, 2021

Thanks!

bollig pushed a commit to bollig/spack that referenced this pull request Jan 12, 2021
…s in llvm package (spack#19933)

* Restore OS based Clang default choice of C++ standard library.

* Add LLVM 11.0.1 release
loulawrence pushed a commit to loulawrence/spack that referenced this pull request Jan 19, 2021
…s in llvm package (spack#19933)

* Restore OS based Clang default choice of C++ standard library.

* Add LLVM 11.0.1 release
tldahlgren pushed a commit to tldahlgren/spack that referenced this pull request Feb 18, 2021
…s in llvm package (spack#19933)

* Restore OS based Clang default choice of C++ standard library.

* Add LLVM 11.0.1 release
likask added a commit to likask/spack that referenced this pull request Feb 27, 2021
…spack_v0.16.1

* commit '8dd2d740b1fbd4335209240fcc42826d0a143f57': (79 commits)
  Update CHANGELOG and release version
  Resolve (post-cherry-picking) flake8 errors
  apple-clang: add correct path to compiler wrappers (spack#21662)
  intel-oneapi-compilers/mpi: add module support (spack#20808)
  intel-oneapi-compilers: add  to LD_LIBRARY_PATH so that it finds libimf.so (spack#20717)
  adding environment to OneMKL packages so that examples will build (spack#21377)
  add intel oneapi to compiler/pkg translations (spack#21448)
  llvm: "master" branch is now "main" branch (spack#21411)
  Print groups properly for spack find -d (spack#20028)
  store sbang_install_path in buildinfo, use for subsequent relocation (spack#20768)
  [WIP] relocate.py: parallelize test replacement logic (spack#19690)
  py-hovorod: fix typo on variant name in conflicts directive (spack#20906)
  concretizer: require at least a dependency type to say the dependency holds
  concretizer: dependency conditions cannot hold if package is external
  libyogrt: remove conflicts triggered by an invalid value (spack#20794)
  restore ability of dev-build to skip patches (spack#20351)
  intel-oneapi-mpi: virtual provider support (spack#20732)
  intel-oneapi-compilers package: correct module file (spack#20686)
  fix mpi lib paths, add virtual provides (spack#20693)
  Remove hard-coded standard C++ library selection and add more releases in llvm package (spack#19933)
  ...
matz-e added a commit to BlueBrain/spack that referenced this pull request Jul 30, 2021
* py-ipykernel: fix install (#19617)

There is a post-install routine in `ipykernel` that needs to be
called for proper registration with jupyter.

* hip support for umpire, chai, raja, camp (#19715)

* create HipPackage base class and do some refactoring

* comments and added conflict to raja for openmp with hip

* fix error handling for spack test results command (#19987)

* py-ipykernel: fix bug in phase method (#19986)

* py-ipykernel: fix bug in phase method

* Fix bug in executable calling

* recognize macOS 11.1 as big sur (#20038)

Big Sur versions go 11.0, 11.0.1, 11.1 (vs. prior versions that
only used the minor component)

Co-authored-by: Todd Gamblin <[email protected]>

* Docs: remove duplication in Command Reference (#20021)

* concretizer: treat conditional providers correctly (#20086)

refers #20040

This modification emits rules like:

provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").

for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.

* concretizer: allow a bool to be passed as argument for tests dependencies (#20082)

refers #20079

Added docstrings to 'concretize' and 'concretized' to
document the format for tests.

Added tests for the activation of test dependencies.

* concretizer: prioritize matching compilers over newer versions (#20020)

fixes #20019

Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:

conflicts('%[email protected]:', when='@:A.B')

where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.

* concretizer: treat target ranges in directives correctly (#19988)

fixes #19981

This commit adds support for target ranges in directives,
for instance:

conflicts('+foo', when='target=x86_64:,aarch64:')

If any target in a spec body is not a known target the
following clause will be emitted:

node_target_satisfies(Package, TargetConstraint)

when traversing the spec and a definition of
the clause will then be printed at the end similarly
to what is done for package and compiler versions.

* Typos: add missing closing parens (#20174)

* concretizer: swap priority of selecting provider and default variant (#20182)

refers #20040

Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were with a default value.

This prevents the solver from avoiding expected configurations
just because they contain directives like:

depends_on('pkg+foo')

and `+foo` is not the default variant value for pkg.

* concretizer: remove ad-hoc rule for external packages (#20193)

fixes #20040

Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.

* spec: return early from concretization if a spec is already concrete (#20196)

* Fixes compile time errors (#20006)

Co-authored-by: michael laufer <[email protected]>

* concretizer: don't optimize emitting version_satisfies() (#20128)

When all versions were allowed a version_satisfies rule was not emitted,
and this caused conditional directives to fail.

* boost: disable find_package's config mode for boost prior to v1.70.0 (#20198)

* Fix hipcc once more (#20095)

* concretizer: try hard to infer the real version of compilers (#20099)

fixes #20055

Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of spec doesn't match the "real" version of the
compiler.

This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.

* concretizer: call inject_patches_variants() on the roots of the specs (#20203)

As was done in the old concretizer. Fixes an issue where conditionally
patched dependencies did not show up in spec (gdal+jasper)

* avoid circular import (#20236)

* environment installs: fix reporting. (#20004)

PR #15702 changed the invocation of the report context when installing
specs, do the same when building environments.

* concretizer: restrict maximizing variant values to MV variants (#20194)

* concretizer: each external version is allowed by definition (#20247)

Registering external versions among the lists of allowed ones
generates the correct rules for `version_satisfies`

* VTK-m: update to specify correct requirements to kokkos (#20097)

* concretizer: refactor handling of special variants dev_build and patches

Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
list at the very end.

The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.

- [x] move ad-hoc value handling code into spec_clauses so we do it in
  one place for CLI and packages

- [x] move handling of `variant_possible_value`, etc. into
  `concretize.lp`, where we can automatically infer variant existence
  more concisely.

- [x] simplify/clarify some of the code for variants in `spec_clauses()`

* bugfix: work around issue handling packages not in any repo

* concretizer: try hard to obtain all needed variant_possible_value()'s (#20102)

Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.

This modification ensures that open-ended variants (variants accepting any string 
or any integer) are projected to the finite set of values that are relevant for this 
concretization.

* Tests: enable re-use of post-install tests in smoke tests (#20298)

* concretizer: remove clingo command-line driver (#20362)

I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.

So far, the command line is faster than running through Python, but I'm
working on fixing that.  I found that if I do this:

```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp")       # code from spack solve --show asp hdf5
control.load("display.lp")

control.ground([("base", [])])
control.solve(...)
```

It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.

So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.

* package sanity: ensure all variant defaults are allowed values (#20373)

* concretizer: don't use one_of_iff for range constraints (#20383)

Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`.  The rules look like this:

```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```

So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.

We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.

We can replace rules like the ones above with facts like this:

```
version_satisfies("python", "2.6:", "2.4")
```

And ground them in `concretize.lp` with rules like this:

```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
  :- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
  :- version(Package, Version), version_satisfies(Package, Constraint, Version).
```

The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- If we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.

- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed

* Fix comparisons for abstract specs (#20341)

bug only relevant for python3

* unit-tests: ensure that installed packages can be reused (#20307)

refers #20292

Added a unit test that ensures we can reuse installed
packages even if in the repository variants have been
removed or added.

* ci: fixes for compiler bootstrapping (#17563)

This PR addresses a number of issues related to compiler bootstrapping.

Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install as they think they have not all their
dependencies installed. This PR collects the dependents and sets them on
compiler tasks.

2. Allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec.  Allow them to build with less
aggressive target optimization settings.

3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly.  Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.

This PR also:
- adds testing
- improves concretizer handling of target ranges

Co-authored-by: Harmen Stoppels <[email protected]>
Co-authored-by: Gregory Becker <[email protected]>
Co-authored-by: Massimiliano Culpo <[email protected]>

* asp: memoize the list of all target_specs to speed-up setup phase (#20473)

* asp: memoize the list of all target_specs to speed-up setup phase

* asp: memoize using a cache per solver object

* concretizer: add #defined statements to avoid warnings.

`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid " info: atom does not
occur in any rule head:" warnings.

* concretizer: pull _develop_specs_from_env out of main setup loop

* concretizer: spec_clauses should traverse dependencies

There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.

* concretizer: move conditional dependency logic into `concretize.lp`

Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.

This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.

The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.

This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.

* concretizer: avoid redundant grounding on dependency types

* concretizer: emit facts for constraints on imposed dependencies

* concretizer: emit facts for integrity constraints

* concretizer: fix failing unit tests

* concretizer: optimized loop on node platforms

We can speed-up the computation by avoiding a
double loop in a cardinality constraint and
enforcing the rule instead as an integrity
constraint.

* concretizer: optimize loop on compiler version

Similar to the optimization on platform

* concretizer: refactor conditional rules to be less repetitious (#20507)

We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:

```
    node(Package) :- attr("node", Package).
    attr("node", Package) :- node(Package).

    version(Package, Version) :- attr("version", Package, Version).
    attr("version", Package, Version) :- version(Package, Version).
```

We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.

Co-authored-by: Massimiliano Culpo <[email protected]>

* Add Intel oneAPI packages (#20411)

This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:

* An inheritable IntelOneApiPackage which knows how to invoke the
  installation script based on which components are requested
* For components which include headers/libraries, an inheritable
  IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
  icc/ifortran but these are not currently detected in this PR

* bugfix: do not write empty default dicts/lists in envs (#20526)

Environment yaml files should not have default values written to them.

To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).

Includes regression test.
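An illustration of the defaulting-getter pattern this commit message describes (not the commit's actual code; the `view` key and its default are used here only as a plausible example): defaults live in the reading code, never in the stored environment data.

```python
# Sparse environment data, as it would be written to yaml: no defaults
# are materialized into the file.
env_yaml = {"specs": ["hdf5"]}

# The default is supplied at read time with a defaulting getter.
view = env_yaml.get("view", True)

# The stored data stays free of default values.
assert view is True
assert "view" not in env_yaml
```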

* concretizer: generate facts for externals

Generate only facts for external specs. Substitute the
use of already grounded rules with non-grounded rules
in concretize.lp

* bugfix: infinite loop when building a set from incomplete specs (#20649)

This code in `SpecBuilder.build_specs()` introduced in #20203, can loop
seemingly interminably for very large specs:

```python
set([spec.root for spec in self._specs.values()])
```

It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.

The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:

```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```

We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.

* concretizer: more detailed section headers in concretize.lp

* concretizer: make _condtion_id_counter an iterator

* concretizer: consolidate handling of virtuals into spec_clauses

* concretizer: convert virtuals to facts; move all rules to `concretize.lp`

This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.

The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.

To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constriants are only handled on versions -- we don't
handle variants or anything else yet, but they key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.

One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:

    :- virtual_node(Virtual),
       version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
       not version_constraint_satisfies(Virtual, V1, V2).

This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.

We can investigate adding synthetic versions for virtuals in the future,
to speed this up.

* concretizer: remove rule generation code from concretizer

Our program only generates facts now, so remove all unused code related
to generating cardinality constraints and rules.

* concretizer: simplify handling of virtual version constraints

Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
INCONSISTENT results from clingo, due to ambiguous semantics like:

    version_constraint_satisfies("mpi", ":1", ":3")
    version_constraint_satisfies("mpi", ":3", ":1")

To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
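The endpoint collection can be sketched as follows (a simplified illustration using plain strings; Spack's real implementation works on Version, VersionRange, and VersionList objects):

```python
def synthetic_versions(constraints):
    """Collect every concrete version and range endpoint from a set of
    version constraints written as strings (e.g. "2.0", "1.9:", ":3").
    The solver can then "pick" one of these fake versions for a virtual,
    just as it picks a real version for a regular package."""
    versions = set()
    for constraint in constraints:
        if ":" in constraint:  # a range -- keep whichever endpoints exist
            low, _, high = constraint.partition(":")
            versions.update(v for v in (low, high) if v)
        else:  # a concrete version
            versions.add(constraint)
    return sorted(versions)
```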

* concretizer: use consistent naming for compiler predicates (#20677)

Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.

- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`

* concretizer: make rules on virtual packages more linear

fixes #20679

In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.

* Remove hard-coded standard C++ library selection and add more releases in llvm package (#19933)

* Restore OS based Clang default choice of C++ standard library.

* Add LLVM 11.0.1 release

* fix mpi lib paths, add virtual provides (#20693)

* intel-oneapi-compilers package: correct module file (#20686)

This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).

* intel-oneapi-mpi: virtual provider support (#20732)

Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).

* restore ability of dev-build to skip patches (#20351)

At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.

* libyogrt: remove conflicts triggered by an invalid value (#20794)

fixes #20611

The conflict was triggered by an invalid value of the
'scheduler' variant. This causes Spack to error when libyogrt
facts are validated by the ASP-based concretizer.

* concretizer: dependency conditions cannot hold if package is external

fixes #20736

Before this one line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.

This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.

* concretizer: require at least a dependency type to say the dependency holds

fixes #20784

Similarly to the previous bug, here we were deducing
conditions to be imposed on nodes that were not part
of the DAG.

* py-hovorod: fix typo on variant name in conflicts directive (#20906)

* [WIP] relocate.py: parallelize test replacement logic (#19690)

* sbang pushed back to callers;
star moved to util.lang

* updated unit test

* sbang test moved; local tests pass

Co-authored-by: Nathan Hanford <[email protected]>

* store sbang_install_path in buildinfo, use for subsequent relocation (#20768)

* Print groups properly for spack find -d (#20028)

* llvm: "master" branch is now "main" branch (#21411)

* add intel oneapi to compiler/pkg translations (#21448)

* adding environment to OneMKL packages so that examples will build (#21377)

* intel-oneapi-compilers: add  to LD_LIBRARY_PATH so that it finds libimf.so (#20717)

* add  to LD_LIBRARY_PATH so that it finds libimf.so

* amrex: fix handling of CUDA arch (#20786)

* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py

Co-authored-by: Axel Huebl <[email protected]>

* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)

This better enables the collective set to be deployed together,
satisfying each other's dependencies.

* r-sf: fix dependency error (#20898)

* improve documentation for Rocm (hip amd builds) (#20812)

* improve documentation

* astyle: Fix makefile for install parameter (#20899)

* llvm-doe: added new package (#20719)

The package contains code duplicated from llvm/package.py;
the duplication will be resolved in a later change.

* r-e1071: added v1.7-4 (#20891)

* r-diffusionmap: added v1.2.0 (#20881)

* r-covr: added v3.5.1 (#20868)

* r-class: added v7.3-17 (#20856)

* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)

For the `~mpi` variant, the environment variable `HDF5_DIR` is still required.  I moved this command out of the `+mpi` conditional.

* fujitsu-fftw: Add new package (#20824)

* pocl: added v1.6 (#20932)

Made versions 1.5 and lower conflict with a64fx.

* PCL: add new package (#20933)

* r-rle: new package (#20916)

Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.

* r-ellipsis: added v0.3.1 (#20913)

* libconfig: add build dependency on texinfo (#20930)

* r-flexmix: add v2.3-17 (#20924)

* r-fitdistrplus: add v1.1-3 (#20923)

* r-fit-models: add v0.64 (#20922)

* r-fields: add v11.6 (#20921)

* r-fftwtools: add v0.9-9 (#20920)

* r-farver: add v2.0.3 (#20919)

* r-expm: add v0.999-6 (#20918)

* cln: add build dependency on texinfo (#20928)

* r-expint: add v0.1-6 (#20917)

* r-envstats: add v2.4.0 (#20915)

* r-energy: add v1.7-7 (#20914)

* r-ellipse: add v0.4.2 (#20912)

* py-fiscalyear: add v0.3.0 (#20911)

* r-ecp: add v3.1.3 (#20910)

* r-plotmo: add v3.6.0 (#20909)

* Improve gcc detection in llvm. (#20189)

Co-authored-by: Tom Scogland <[email protected]>
Co-authored-by: Thomas Green <[email protected]>

* hatchet: updated urls (#20908)

* py-anuga: add new package (#20782)

* libvips: added v8.10.5 (#20902)

* libzmq: add platform conditions to libbsd dependency (#20893)

* r-dtw: add v1.22-3 (#20890)

* r-dt: add v0.17 (#20889)

* r-dosnow: add v1.0.19 (#20888)

* add version 1.0.16 to r-doparallel (#20886)

* add version 1.3.7 to r-domc (#20885)

* add version 0.9-15 to r-diversitree (#20884)

* add version 1.3-3 to r-dismo (#20883)

* add version 0.6.27 to r-digest (#20882)

* add version 1.5 to r-rngtools (#20887)

* add version 1.5.8 to r-dicekriging (#20877)

* add version 1.4.2 to r-httr (#20876)

* add version 1.28 to r-desolve (#20875)

* add version 2.2-5 to r-deoptim (#20874)

* add version 0.2-3 to r-deldir (#20873)

* add version 1.0.0 to r-crul (#20870)

* add version 1.1.0.1 to r-crosstalk (#20869)

* add version 1.0-1 to r-copula (#20867)

* add version 5.0.2 to r-rcppparallel (#20866)

* add version 2.0-1 to r-compositions (#20865)

* add version 0.4.10 to r-rlang (#20796)

* add version 0.3.6 to r-vctrs (#20878)

* amrex: add ROCm support (#20809)

* add version 2.0-0 to r-colorspace (#20864)

* add version 1.3-1 to r-coin (#20863)

* add version 0.19-4 to r-coda (#20862)

* add version 1.3.7 to r-clustergeneration (#20861)

* add version 0.3-58 to r-clue (#20860)

* add version 0.7.1 to r-clipr (#20859)

* add version 2.2.0 to r-cli (#20858)

* add version 0.4-3 to r-classint (#20857)

* add version 0.1.2 to r-globaloptions (#20855)

* add version 2.3-56 to r-chron (#20854)

* add version 0.4.10 to r-checkpoint (#20853)

* add version 2.0.0 to r-checkmate (#20852)

* add version 1.18.1 to r-catools (#20850)

* add version 1.2.2.2 to r-modelmetrics (#20849)

* add version 3.0-4 to r-cardata (#20847)

* add version 1.0.1 to r-caracas (#20846)

* r-lifecycle: new package at v0.2.0 (#20845)

* add version 3.0-10 to r-car (#20844)

* add version 3.4.5 to r-processx (#20843)

* add version 1.5-12.2 to r-cairo (#20842)

* add version 0.2.3 to r-cubist (#20841)

* add version 2.6 to r-rmarkdown (#20838)

* add version 1.2.1 to r-blob (#20819)

* add version 4.0.4 to r-bit (#20818)

* add version 2.4-1 to r-bio3d (#20816)

* add version 0.4.2.3 to r-bibtex (#20815)

* add version 3.1-4 to r-bayesm (#20807)

* add version 1.2.1 to r-backports (#20806)

* add version 2.0.3 to r-argparse (#20805)

* add version 5.4-1 to r-ape (#20804)

* add version 0.8-18 to r-amap (#20803)

* r-pixmap: added new package (#20795)

* zoltan: source code location change (#20787)

* refactor path logic

* added some paths to make compilers and libs discoverable

* add  to LD_LIBRARY_PATH so that it finds libimf.so
and cleanup PEP8

* refactor path logic

* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi

* added vals for CC=icx, CXX=icpx, FC=ifx to generated module

* back out changes to intel-oneapi-mpi, save for separate PR

* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py

path is joined in _ld_library_path()

Co-authored-by: Robert Cohn <[email protected]>

* set absolute paths to icx,icpx,ifx

* dang close parenthesis

Co-authored-by: Robert Cohn <[email protected]>
Co-authored-by: mic84 <[email protected]>
Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Chuck Atkins <[email protected]>
Co-authored-by: darmac <[email protected]>
Co-authored-by: Danny Taller <[email protected]>
Co-authored-by: Tomoyasu Nojiri <[email protected]>
Co-authored-by: Shintaro Iwasaki <[email protected]>
Co-authored-by: Glenn Johnson <[email protected]>
Co-authored-by: Kelly (KT) Thompson <[email protected]>
Co-authored-by: Henrique Mendonça <[email protected]>
Co-authored-by: h-denpo <[email protected]>
Co-authored-by: Adam J. Stewart <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Tom Scogland <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Abhinav Bhatele <[email protected]>
Co-authored-by: a-saitoh-fj <[email protected]>
Co-authored-by: QuellynSnead <[email protected]>

* intel-oneapi-compilers/mpi: add module support (#20808)

Facilitate running intel-oneapi-mpi outside of Spack (set PATH,
LD_LIBRARY_PATH, etc. appropriately).

Co-authored-by: Robert Cohn <[email protected]>

* apple-clang: add correct path to compiler wrappers (#21662)

Follow-up to #17110

### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `[email protected]`.

An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.

* Resolve (post-cherry-picking) flake8 errors

* Update CHANGELOG and release version

* updates for new tutorial

update s3 bucket
update tutorial branch

* update tutorial public key

* respect -k/verify-ssl-false in _existing_url method (#21864)

* use package supplied autogen.sh (#20319)

* Python 3.10 support: collections.abc (#20441)

(cherry picked from commit 40a40e0265d6704a7836aeb30a776d66da8f7fe6)

* concretizer: simplify "fact" method (#21148)

The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.

Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.

(cherry picked from commit ba42c36f00fe40c047121a32117018eb93e0c4b1)

* Improve error message for inconsistencies in package.py (#21811)

* Improve error message for inconsistencies in package.py

Sometimes directives refer to variants that do not exist.
Make it such that:

1. The name of the variant
2. The name of the package which is supposed to have
   such variant
3. The name of the package making this assumption

are all printed in the error message for easier debugging.
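A minimal sketch of such a message (illustrative only; the function name and exact wording below are hypothetical, not Spack's actual error text):

```python
def unknown_variant_message(variant, pkg_with_variant, referring_pkg):
    """Combine the three names needed to debug a directive that
    refers to a variant which does not exist."""
    return (
        "variant '{0}' does not exist in package '{1}' "
        "(assumed by a directive in package '{2}')"
    ).format(variant, pkg_with_variant, referring_pkg)
```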

* Add unit tests

(cherry picked from commit 7226bd64dc3b46a1ed361f1e9d7fb4a2a5b65200)

* Updates to support clingo-cffi (#20657)

* Support clingo when used with cffi

Clingo recently merged in a new Python module option based on cffi.

Compatibility with this module requires a few changes to spack - it does not automatically convert strings/ints/etc to Symbol and clingo.Symbol.string throws on failure.

manually convert str/int to clingo.Symbol types
catch stringify exceptions
add job for clingo-cffi to Spack CI
switch to potassco-vendored wheel for clingo-cffi CI
on_unsat argument when cffi

(cherry picked from commit 93ed1a410c4a202eab3a68769fd8c0d4ff8b1c8e)
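The manual str/int conversion can be sketched like this (a simplified stand-in: the `_String`/`_Number` classes mimic `clingo.String` and `clingo.Number` so the sketch runs without clingo installed; the real code constructs actual clingo symbols):

```python
class _String:
    """Stand-in for clingo.String (a symbol with a .string attribute)."""
    def __init__(self, value):
        self.string = value

class _Number:
    """Stand-in for clingo.Number (a symbol with a .number attribute)."""
    def __init__(self, value):
        self.number = value

def to_symbol(value, String=_String, Number=_Number):
    """Explicitly convert plain Python values to clingo symbols.
    The cffi-based clingo module no longer converts str/int implicitly."""
    if isinstance(value, bool):  # check bool first: bool is a subclass of int
        return String(str(value))
    if isinstance(value, int):
        return Number(value)
    return String(str(value))
```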

* Run clingo-cffi tests in a container (#21913)

The clingo-cffi job has two issues to be solved:

1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/

The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".

The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).

For the time being, run the tests in a container. Switch back to
PyPI whenever a new official version of clingo is released.

* repo: generalize "swap" context manager to also accept paths

The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.

Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.

Make a few adjustments to MockPackageMultiRepo, since its docstring
stated that it was supposed to mock spack.repo.Repo while it was
actually mocking spack.repo.RepoPath.

(cherry picked from commit 1a8963b0f4c11c4b7ddd347e6cd95cdc68ddcbe0)

* Move context manager to swap the current store into spack.store

The context manager can be used to swap the current
store temporarily, for any use case that may need it.

(cherry picked from commit cb2c233a97073f8c5d89581ee2a2401fef5f878d)

* Move context manager to swap the current configuration into spack.config

The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.

(cherry picked from commit 553d37a6d62b05f15986a702394f67486fa44e0e)

* bugfix for target adjustments on target ranges (#20537)

(cherry picked from commit 61c1b71d38e62a5af81b3b7b8a8d12b954d99f0a)

* Added a context manager to swap architectures

This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.

This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated

(cherry picked from commit 4558dc06e21e01ab07a43737b8cb99d1d69abb5d)

* make `spack fetch` work with environments (#19166)

* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
              the specs to be fetched, even when in an environment
* now: if there is no spec(s) provided to `spack fetch` we check
       if an environment is active and if yes we fetch all
       uninstalled specs.
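The fallback logic above can be sketched as follows (an illustration with hypothetical names; the real command works on an active Spack environment's concretized specs):

```python
def specs_to_fetch(cli_specs, active_env=None):
    """Return the specs `spack fetch` should act on: the ones given on
    the command line, or -- if none were given and an environment is
    active -- all of the environment's uninstalled specs."""
    if cli_specs:
        return list(cli_specs)
    if active_env is not None:
        return [s for s in active_env.specs if not s.installed]
    raise ValueError("no specs given and no active environment")
```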

* clingo: prefer master branch

Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.

- [x] make `master` the default so we don't have to keep telling people
  to install `clingo@master`. We'll update the preferred version when
  there's a new release.

* Clingo: fix missing import (#21364)

* clingo: added a package with option for bootstrapping clingo (#20652)

* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo

package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically

* clingo-bootstrap: apple-clang options to bootstrap statically on darwin

* clingo: fix the path of the Python interpreter

In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.

This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.

* clingo: the commit for "spack" version has been updated.

* clingo: fix typo (#22444)

* clingo-bootstrap: account for cray platform (#22460)

(cherry picked from commit 138312efabd534fa42d1a16e172e859f0d2b5842)

* Bootstrap clingo from sources (#21446)

* Allow the bootstrapping of clingo from sources

Allow python builds with system python as external
for MacOS

* Ensure consistent configuration when bootstrapping clingo

This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources

* Add command to inspect and clean the bootstrap store

 Prevent users from setting the install tree root to the bootstrap store

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <[email protected]>
(cherry picked from commit 10e9e142b75c6ca8bc61f688260c002201cc1b22)

* bootstrap: account for platform specific configuration scopes (#22489)

This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.

(cherry picked from commit 413c422e53843a9e33d7b77a8c44dcfd4bf701be)

* concretizer: unify logic for spec conditionals

This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.

Given some directives in a package like:

```python
depends_on("[email protected]+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```

We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:

```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
  attr(Name, Arg1)             : required_dependency_condition(ID, Name, Arg1);
  attr(Name, Arg1, Arg2)       : required_dependency_condition(ID, Name, Arg1, Arg2);
  attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
  dependency_condition(ID, Parent, Dependency);
  node(Parent).
```

And we handled `[email protected]+bar` and `mpi@2:` parts ("imposed constraints")
like this:

```prolog
attr(Name, Arg1, Arg2) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2).

attr(Name, Arg1, Arg2, Arg3) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```

These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.

Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:

```prolog
condition_holds(ID) :-
  condition(ID);
  attr(Name, A1)         : condition_requirement(ID, Name, A1);
  attr(Name, A1, A2)     : condition_requirement(ID, Name, A1, A2);
  attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).

attr(Name, A1)         :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2)     :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```

this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:

```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```

The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:

```prolog
dependency_holds(Package, Dependency, Type) :-
  dependency_condition(ID, Package, Dependency),
  dependency_type(ID, Type),
  condition_holds(ID).
```

Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:

```python
    def package_dependencies_rules(self, pkg, tests):
        """Translate 'depends_on' directives into ASP logic."""
        for _, conditions in sorted(pkg.dependencies.items()):
            for cond, dep in sorted(conditions.items()):
                condition_id = self.condition(cond, dep.spec, pkg.name)  # create a condition and get its id
                self.gen.fact(fn.dependency_condition(  # associate specifics about the dependency w/the id
                    condition_id, pkg.name, dep.spec.name
                ))
        # etc.
```

- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals

* bugfix: do not generate dep conditions when no dependency

We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.

- [x] change asp.py so that this can't happen -- we now only generate
      dependency types for possible dependencies.

* bugfix: allow imposed constraints to be overridden in special cases

In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.

This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.

We add one special case for this: dependencies of externals.

* spack location: bugfix for out of source build dirs (#22348)

* Channelflow: Fix the package. (#22483)

A search and replace went wrong in 2264e30d99d8b9fbdec8fa69b594e53d8ced15a1.

Thanks to @wadudmiah who reported this issue.

* Make SingleFileScope able to repopulate the cache after clearing it (#22559)

fixes #22547

SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.

* ASP-based solver: model disjoint sets for multivalued variants (#22534)

* ASP-based solver: avoid adding values to variants when they're set

fixes #22533
fixes #21911

Added a rule that prevents any value from slipping into a variant when
the variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.

* Ensure disjoint sets have a clear semantics for external packages

* clingo: modify recipe for bootstrapping (#22354)

* clingo: modify recipe for bootstrapping

Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping

* Remove option that breaks on linux

* Give more hints for the current Python

* Disable CLINGO_BUILD_PY_SHARED for bootstrapping

* bootstrapping: try to detect the current python from std library

This is much faster than calling external executables

* Fix compatibility with Python 2.6

* Give hints on which compiler and OS to use when bootstrapping

This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.

* Use spec_for_current_python to constrain module requirement

(cherry picked from commit d5fa509b072f0e58f00eaf81c60f32958a9f1e1d)

* Externals are preferred even when they have non-default variant values

fixes #22596

Variants which are specified in an external spec are not
scored negatively if they encode a non-default value.

* Enforce uniqueness of the version_weight atom per node

fixes #22565

This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.

Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.

This led to unexpected results, like preferring to build a
new version of an external if the external version was
older.

* bugfix for active when pkg is already active error (#22587)

* bugfix for active when pkg is already active error

Co-authored-by: Greg Becker <[email protected]>

* Fix clearing cache of InternalConfigScope (#22609)

Co-authored-by: Massimiliano Culpo <[email protected]>

* Bootstrap: add _builtin config scope (#22610)

(cherry picked from commit a37c916dff5a5c6e72f939433931ab69dfd731bd)

* Bootstrapping: swap store before configuration (#22631)

fixes #22294

A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
the custom install tree not being taken into account if clingo
had to be bootstrapped.

This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.

* Remove erroneous warnings about quotes for from_source_file (#22767)

* "spack build-env" searches env for relevant spec (#21642)

If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.

This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.

This makes a similar change to spack cd.

If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).

Co-authored-by: Gregory Becker <[email protected]>
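The matching behavior described above can be sketched as follows (a hypothetical helper, not the actual build-env/cd command code; `satisfies` stands in for Spack's spec-matching logic):

```python
def matching_env_spec(user_spec, env_specs, satisfies):
    """Find the environment spec matching a user-provided spec.
    Report all candidates if the match is ambiguous, so the user
    can make the query more specific."""
    matches = [s for s in env_specs if satisfies(s, user_spec)]
    if len(matches) > 1:
        raise ValueError("multiple matching specs: " + ", ".join(matches))
    return matches[0] if matches else None
```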

* ASP-based solver: assign OS correctly with inheritance from parent (#22896)

fixes #22871

When in presence of multiple choices for the operating system
we were lacking a rule to derive the node OS if it was
inherited.

* Externals with merged prefixes (#22653)

We remove system paths from search variables like PATH and 
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages 
may be installed to prefixes that are not actually system paths 
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.

* ASP-based solver: suppress warnings when constructing facts (#23090)

fixes #22786

Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.

* Use Python's built-in machinery to import compilers (#23290)

* Add "spack [cd|location] --source-dir" (#22321)

* spack location: fix usage without args (#22755)

* Import hooks using Python's built-in machinery (#23288)

The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, such a function is unnecessary, since
we don't care to bind a custom name to a module, nor do we have to
load it from an unknown location.

This PR thus modifies spack.hook in the following ways:

- Use __import__ instead of spack.util.imp.load_source (this
  addresses #20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
  to stay local

* ASP-based solver: no intermediate package for concretizing together (#23307)

The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring at most one spec per package).

Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.

* ASP-based solve: minimize compiler mismatches (#23016)

fixes #22718

Instead of trying to maximize the number of
matches (preferred behavior), try to minimize
the number of mismatches (unwanted behavior).

* performance: speed up existence checks in packages (#23661)

Spack doesn't require users to manually index their repos; it reindexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.

When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`).  That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations.  It's a win now to make `Repo.exists()` check files directly.

**Fix:**

This PR does a number of things to speed up `spack load`, `spack info`, and other commands:

- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens when needed
      (avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
      a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
      exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
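The big win above is the direct file check. A minimal sketch of that fast path, assuming a hypothetical `<root>/packages/<name>/package.py` layout (mirroring a Spack builtin repo, not Spack's actual `Repo` API):

```python
import os
import tempfile

def package_exists(repo_root, pkg_name):
    # Stat one file directly instead of consulting a full repo index.
    pkg_file = os.path.join(repo_root, "packages", pkg_name, "package.py")
    return os.path.exists(pkg_file)

# Tiny demo repo with a single package.
repo = tempfile.mkdtemp()
os.makedirs(os.path.join(repo, "packages", "zlib"))
open(os.path.join(repo, "packages", "zlib", "package.py"), "w").close()
```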

* Style fixes for v0.16.2 release

* Update CHANGELOG and release version for v0.16.2

* Update command to setup tutorial (#24488)

* Flake8, bash completion

Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Danny Taller <[email protected]>
Co-authored-by: Greg Becker <[email protected]>
Co-authored-by: Adam J. Stewart <[email protected]>
Co-authored-by: Martin Aumüller <[email protected]>
Co-authored-by: Todd Gamblin <[email protected]>
Co-authored-by: Massimiliano Culpo <[email protected]>
Co-authored-by: George Hartzell <[email protected]>
Co-authored-by: MichaelLaufer <[email protected]>
Co-authored-by: michael laufer <[email protected]>
Co-authored-by: Andrew W Elble <[email protected]>
Co-authored-by: Harmen Stoppels <[email protected]>
Co-authored-by: Robert Maynard <[email protected]>
Co-authored-by: Massimiliano Culpo <[email protected]>
Co-authored-by: Tamara Dahlgren <[email protected]>
Co-authored-by: Scott Wittenburg <[email protected]>
Co-authored-by: Robert Cohn <[email protected]>
Co-authored-by: Ye Luo <[email protected]>
Co-authored-by: Frank Willmore <[email protected]>
Co-authored-by: Robert Underwood <[email protected]>
Co-authored-by: Henrique Mendonça <[email protected]>
Co-authored-by: Nathan Hanford <[email protected]>
Co-authored-by: Nathan Hanford <[email protected]>
Co-authored-by: eugeneswalker <[email protected]>
Co-authored-by: Yang Zongze <[email protected]>
Co-authored-by: mic84 <[email protected]>
Co-authored-by: Chuck Atkins <[email protected]>
Co-authored-by: darmac <[email protected]>
Co-authored-by: Tomoyasu Nojiri <[email protected]>
Co-authored-by: Shintaro Iwasaki <[email protected]>
Co-authored-by: Glenn Johnson <[email protected]>
Co-authored-by: Kelly (KT) Thompson <[email protected]>
Co-authored-by: h-denpo <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Tom Scogland <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Abhinav Bhatele <[email protected]>
Co-authored-by: a-saitoh-fj <[email protected]>
Co-authored-by: QuellynSnead <[email protected]>
Co-authored-by: Tamara Dahlgren <[email protected]>
Co-authored-by: Phil Tooley <[email protected]>
Co-authored-by: Josh Essman <[email protected]>
Co-authored-by: Andreas Baumbach <[email protected]>
Co-authored-by: Maxim Belkin <[email protected]>
Co-authored-by: Rémi Lacroix <[email protected]>
Co-authored-by: Cyrus Harrison <[email protected]>
Co-authored-by: Peter Scheibel <[email protected]>
Co-authored-by: Todd Gamblin <[email protected]>
matz-e added a commit to BlueBrain/spack that referenced this pull request Sep 23, 2021
* py-ipykernel: fix install (#19617)

There is a post-install routine in `ipykernel` that needs to be
called for proper registration with jupyter.

* hip support for umpire, chai, raja, camp (#19715)

* create HipPackage base class and do some refactoring

* comments and added conflict to raja for openmp with hip

* fix error handling for spack test results command (#19987)

* py-ipykernel: fix bug in phase method (#19986)

* py-ipykernel: fix bug in phase method

* Fix bug in executable calling

* recognize macOS 11.1 as big sur (#20038)

Big Sur versions go 11.0, 11.0.1, 11.1 (vs. prior versions that
only used the minor component)

Co-authored-by: Todd Gamblin <[email protected]>

* Docs: remove duplication in Command Reference (#20021)

* concretizer: treat conditional providers correctly (#20086)

refers #20040

This modification emits rules like:

provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").

for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.

* concretizer: allow a bool to be passed as argument for tests dependencies (#20082)

refers #20079

Added docstrings to 'concretize' and 'concretized' to
document the format for tests.

Added tests for the activation of test dependencies.

* concretizer: prioritize matching compilers over newer versions (#20020)

fixes #20019

Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:

conflicts('%[email protected]:', when='@:A.B')

where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.

* concretizer: treat target ranges in directives correctly (#19988)

fixes #19981

This commit adds support for target ranges in directives,
for instance:

conflicts('+foo', when='target=x86_64:,aarch64:')

If any target in a spec body is not a known target the
following clause will be emitted:

node_target_satisfies(Package, TargetConstraint)

when traversing the spec and a definition of
the clause will then be printed at the end similarly
to what is done for package and compiler versions.

* Typos: add missing closing parens (#20174)

* concretizer: swap priority of selecting provider and default variant (#20182)

refers #20040

Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were with a default value.

This prevents the solver from avoiding expected configurations
just because they contain directives like:

depends_on('pkg+foo')

and `+foo` is not the default variant value for pkg.

* concretizer: remove ad-hoc rule for external packages (#20193)

fixes #20040

Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.

* spec: return early from concretization if a spec is already concrete (#20196)

* Fixes compile time errors (#20006)

Co-authored-by: michael laufer <[email protected]>

* concretizer: don't optimize emitting version_satisfies() (#20128)

When all versions were allowed, a version_satisfies rule was not emitted,
which caused conditional directives to fail.

* boost: disable find_package's config mode for boost prior to v1.70.0 (#20198)

* Fix hipcc once more (#20095)

* concretizer: try hard to infer the real version of compilers (#20099)

fixes #20055

Compiler with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of spec doesn't match the "real" version of the
compiler.

This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.

* concretizer: call inject_patches_variants() on the roots of the specs (#20203)

As was done in the old concretizer. Fixes an issue where conditionally
patched dependencies did not show up in spec (gdal+jasper)

* avoid circular import (#20236)

* environment installs: fix reporting. (#20004)

PR #15702 changed the invocation of the report context when installing
specs; do the same when building environments.

* concretizer: restrict maximizing variant values to MV variants (#20194)

* concretizer: each external version is allowed by definition (#20247)

Registering external versions among the lists of allowed ones
generates the correct rules for `version_satisfies`

* VTK-m: update to specify correct requirements to kokkos (#20097)

* concretizer: refactor handling of special variants dev_build and patches

Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
list at the very end.

The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.

- [x] move ad-hoc value handling code into spec_clauses so we do it in
  one place for CLI and packages

- [x] move handling of `variant_possible_value`, etc. into
  `concretize.lp`, where we can automatically infer variant existence
  more concisely.

- [x] simplify/clarify some of the code for variants in `spec_clauses()`

* bugfix: work around issue handling packages not in any repo

* concretizer: try hard to obtain all needed variant_possible_value()'s (#20102)

Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.

This modification ensures that open-ended variants (variants accepting any string 
or any integer) are projected to the finite set of values that are relevant for this 
concretization.

* Tests: enable re-use of post-install tests in smoke tests (#20298)

* concretizer: remove clingo command-line driver (#20362)

I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.

So far, the command line is faster than running through Python, but I'm
working on fixing that.  I found that if I do this:

```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp")       # code from spack solve --show asp hdf5
control.load("display.lp")

control.ground([("base", [])])
control.solve(...)
```

It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.

So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.

* package sanity: ensure all variant defaults are allowed values (#20373)

* concretizer: don't use one_of_iff for range constraints (#20383)

Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`.  The rules look like this:

```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```

So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.

We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.

We can replace rules like the ones above with facts like this:

```
version_satisfies("python", "2.6:", "2.4")
```

And ground them in `concretize.lp` with rules like this:

```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
  :- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
  :- version(Package, Version), version_satisfies(Package, Constraint, Version).
```

The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- if we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.

- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed
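To illustrate, precomputing the `version_satisfies/3` facts from a known version list might look like the sketch below (a simplified `lo:hi` range syntax; the function names are illustrative, not Spack's actual setup code):

```python
def satisfies(version, constraint):
    # True if dotted `version` lies in `constraint` of the form
    # "lo:hi", "lo:", ":hi", or an exact version string.
    def key(v):
        return tuple(int(part) for part in v.split("."))
    if ":" not in constraint:
        return version == constraint
    lo, hi = constraint.split(":")
    if lo and key(version) < key(lo):
        return False
    if hi and key(version) > key(hi):
        return False
    return True

def version_satisfies_facts(pkg, constraint, known_versions):
    # Emit one ground fact per known version satisfying the constraint.
    return [f'version_satisfies("{pkg}", "{constraint}", "{v}")'
            for v in known_versions if satisfies(v, constraint)]
```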

* Fix comparisons for abstract specs (#20341)

bug only relevant for python3

* unit-tests: ensure that installed packages can be reused (#20307)

refers #20292

Added a unit test that ensures we can reuse installed
packages even if in the repository variants have been
removed or added.

* ci: fixes for compiler bootstrapping (#17563)

This PR addresses a number of issues related to compiler bootstrapping.

Specifically:
1. Collect compilers to be bootstrapped while queueing in the installer.
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install because they think that not all of
their dependencies are installed. This PR collects the dependents and sets
them on compiler tasks.

2. Allow bootstrapped compilers to back off the target.
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec.  Allow them to build with less
aggressive target optimization settings.

3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly.  Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.

This PR also:
- adds testing
- improves concretizer handling of target ranges
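The intersection of target ranges can be sketched over an ordered list of targets (the names and ordering below are illustrative; Spack's real microarchitecture model is richer):

```python
# Hypothetical targets ordered oldest -> newest.
ORDER = ["x86_64", "nehalem", "haswell", "skylake", "icelake"]

def intersect(a, b):
    # Each range is (low, high) by target name; None means an open end.
    def idx(name, default):
        return ORDER.index(name) if name is not None else default
    lo = max(idx(a[0], 0), idx(b[0], 0))
    hi = min(idx(a[1], len(ORDER) - 1), idx(b[1], len(ORDER) - 1))
    # An empty intersection means the two ranges cannot be satisfied together.
    return (ORDER[lo], ORDER[hi]) if lo <= hi else None
```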

Co-authored-by: Harmen Stoppels <[email protected]>
Co-authored-by: Gregory Becker <[email protected]>
Co-authored-by: Massimiliano Culpo <[email protected]>

* asp: memoize the list of all target_specs to speed-up setup phase (#20473)

* asp: memoize the list of all target_specs to speed-up setup phase

* asp: memoize using a cache per solver object

* concretizer: add #defined statements to avoid warnings.

`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid " info: atom does not
occur in any rule head:" warnings.

* concretizer: pull _develop_specs_from_env out of main setup loop

* concretizer: spec_clauses should traverse dependencies

There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.

* concretizer: move conditional dependency logic into `concretize.lp`

Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.

This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.

The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.

This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.

* concretizer: avoid redundant grounding on dependency types

* concretizer: emit facts for constraints on imposed dependencies

* concretizer: emit facts for integrity constraints

* concretizer: fix failing unit tests

* concretizer: optimized loop on node platforms

We can speed up the computation by avoiding a
double loop in a cardinality constraint and
enforcing the rule instead as an integrity
constraint.

* concretizer: optimize loop on compiler version

Similar to the optimization on platform

* concretizer: refactor conditional rules to be less repetitious (#20507)

We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:

```
    node(Package) :- attr("node", Package).
    attr("node", Package) :- node(Package).

    version(Package, Version) :- attr("version", Package, Version).
    attr("version", Package, Version) :- version(Package, Version).
```

We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.

Co-authored-by: Massimiliano Culpo <[email protected]>

* Add Intel oneAPI packages (#20411)

This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:

* An inheritable IntelOneApiPackage which knows how to invoke the
  installation script based on which components are requested
* For components which include headers/libraries, an inheritable
  IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
  icc/ifortran but these are not currently detected in this PR

* bugfix: do not write empty default dicts/lists in envs (#20526)

Environment yaml files should not have default values written to them.

To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).

Includes regression test.
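The defaulting-getter pattern can be sketched as follows (the key names are hypothetical, not Spack's actual schema):

```python
def env_view(env_yaml):
    # Read a setting with a code-side default instead of materializing
    # the default into the YAML mapping itself.
    return env_yaml.get("view", True)

# The on-disk mapping stays minimal: no default keys are written back.
minimal_env = {"specs": ["zlib"]}
```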

* concretizer: generate facts for externals

Generate only facts for external specs. Substitute the
use of already grounded rules with non-grounded rules
in concretize.lp

* bugfix: infinite loop when building a set from incomplete specs (#20649)

This code in `SpecBuilder.build_specs()` introduced in #20203, can loop
seemingly interminably for very large specs:

```python
set([spec.root for spec in self._specs.values()])
```

It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.

The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:

```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```

We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.
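The identity-based deduplication can be demonstrated in isolation (`Spec` here is a bare stand-in class, not Spack's):

```python
class Spec:
    """Stand-in: expensive equality/hashing deliberately left undefined."""

a, b = Spec(), Spec()
roots = [a, b, a, b, a]

# A dict keyed on id() never calls __eq__ or __hash__ on the objects,
# so it avoids the pathological comparisons that set() triggered.
unique_roots = dict((id(r), r) for r in roots)
```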

* concretizer: more detailed section headers in concretize.lp

* concretizer: make _condition_id_counter an iterator

* concretizer: consolidate handling of virtuals into spec_clauses

* concretizer: convert virtuals to facts; move all rules to `concretize.lp`

This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.

The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.

To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constraints are only handled on versions -- we don't
handle variants or anything else yet, but the key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.

One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:

    :- virtual_node(Virtual),
       version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
       not version_constraint_satisfies(Virtual, V1, V2).

This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.

We can investigate adding synthetic versions for virtuals in the future,
to speed this up.

* concretizer: remove rule generation code from concretizer

Our program only generates facts now, so remove all unused code related
to generating cardinality constraints and rules.

* concretizer: simplify handling of virtual version constraints

Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
inconsistent results from clingo, due to ambiguous semantics like:

    version_constraint_satisfies("mpi", ":1", ":3")
    version_constraint_satisfies("mpi", ":3", ":1")

To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
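Collecting the synthetic versions from a virtual's constraints might look like this simplified sketch (Spack's real version syntax is richer than the plain `lo:hi` form assumed here):

```python
def synthetic_versions(constraints):
    # Gather the concrete endpoints mentioned in range constraints such
    # as ":1", "2.0:3.0", or "2.0:" to serve as possible (fake) versions
    # for a virtual package.
    versions = set()
    for constraint in constraints:
        for endpoint in constraint.split(":"):
            if endpoint:
                versions.add(endpoint)
    return sorted(versions)
```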

* concretizer: use consistent naming for compiler predicates (#20677)

Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.

- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`

* concretizer: make rules on virtual packages more linear

fixes #20679

In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.

* Remove hard-coded standard C++ library selection and add more releases in llvm package (#19933)

* Restore OS based Clang default choice of C++ standard library.

* Add LLVM 11.0.1 release

* fix mpi lib paths, add virtual provides (#20693)

* intel-oneapi-compilers package: correct module file (#20686)

This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).

* intel-oneapi-mpi: virtual provider support (#20732)

Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).

* restore ability of dev-build to skip patches (#20351)

At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.

* libyogrt: remove conflicts triggered by an invalid value (#20794)

fixes #20611

The conflict was triggered by an invalid value of the
'scheduler' variant. This caused Spack to error when libyogrt
facts were validated by the ASP-based concretizer.

* concretizer: dependency conditions cannot hold if package is external

fixes #20736

Before this one-line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.

This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.

* concretizer: require at least a dependency type to say the dependency holds

fixes #20784

Similarly to the previous bug, here we were deducing
conditions to be imposed on nodes that were not part
of the DAG.

* py-horovod: fix typo on variant name in conflicts directive (#20906)

* [WIP] relocate.py: parallelize test replacement logic (#19690)

* sbang pushed back to callers;
star moved to util.lang

* updated unit test

* sbang test moved; local tests pass

Co-authored-by: Nathan Hanford <[email protected]>

* store sbang_install_path in buildinfo, use for subsequent relocation (#20768)

* Print groups properly for spack find -d (#20028)

* llvm: "master" branch is now "main" branch (#21411)

* add intel oneapi to compiler/pkg translations (#21448)

* adding environment to OneMKL packages so that examples will build (#21377)

* intel-oneapi-compilers: add  to LD_LIBRARY_PATH so that it finds libimf.so (#20717)

* add  to LD_LIBRARY_PATH so that it finds libimf.so

* amrex: fix handling of CUDA arch (#20786)

* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py

Co-authored-by: Axel Huebl <[email protected]>

* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)

This better enables the collective set to be deployed together, satisfying
each other's dependencies.

* r-sf: fix dependency error (#20898)

* improve documentation for Rocm (hip amd builds) (#20812)

* improve documentation

* astyle: Fix makefile for install parameter (#20899)

* llvm-doe: added new package (#20719)

The package contains code duplicated from llvm/package.py,
which a later change will supersede.

* r-e1071: added v1.7-4 (#20891)

* r-diffusionmap: added v1.2.0 (#20881)

* r-covr: added v3.5.1 (#20868)

* r-class: added v7.3-17 (#20856)

* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)

For the `~mpi` variant, the environment variable `HDF5_DIR` is still required.  I moved this command out of the `+mpi` conditional.

* py-horovod: fix typo on variant name in conflicts directive (#20906)

* fujitsu-fftw: Add new package (#20824)

* pocl: added v1.6 (#20932)

Made version 1.5 or lower conflict with a64fx.

* PCL: add new package (#20933)

* r-rle: new package (#20916)

Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.

* r-ellipsis: added v0.3.1 (#20913)

* libconfig: add build dependency on texinfo (#20930)

* r-flexmix: add v2.3-17 (#20924)

* r-fitdistrplus: add v1.1-3 (#20923)

* r-fit-models: add v0.64 (#20922)

* r-fields: add v11.6 (#20921)

* r-fftwtools: add v0.9-9 (#20920)

* r-farver: add v2.0.3 (#20919)

* r-expm: add v0.999-6 (#20918)

* cln: add build dependency on texinfo (#20928)

* r-expint: add v0.1-6 (#20917)

* r-envstats: add v2.4.0 (#20915)

* r-energy: add v1.7-7 (#20914)

* r-ellipse: add v0.4.2 (#20912)

* py-fiscalyear: add v0.3.0 (#20911)

* r-ecp: add v3.1.3 (#20910)

* r-plotmo: add v3.6.0 (#20909)

* Improve gcc detection in llvm. (#20189)

Co-authored-by: Tom Scogland <[email protected]>
Co-authored-by: Thomas Green <[email protected]>

* hatchet: updated urls (#20908)

* py-anuga: add new package (#20782)

* libvips: added v8.10.5 (#20902)

* libzmq: add platform conditions to libbsd dependency (#20893)

* r-dtw: add v1.22-3 (#20890)

* r-dt: add v0.17 (#20889)

* r-dosnow: add v1.0.19 (#20888)

* add version 1.0.16 to r-doparallel (#20886)

* add version 1.3.7 to r-domc (#20885)

* add version 0.9-15 to r-diversitree (#20884)

* add version 1.3-3 to r-dismo (#20883)

* add version 0.6.27 to r-digest (#20882)

* add version 1.5 to r-rngtools (#20887)

* add version 1.5.8 to r-dicekriging (#20877)

* add version 1.4.2 to r-httr (#20876)

* add version   1.28 to r-desolve (#20875)

* add version   2.2-5 to r-deoptim (#20874)

* add version   0.2-3 to r-deldir (#20873)

* add version   1.0.0 to r-crul (#20870)

* add version   1.1.0.1 to r-crosstalk (#20869)

* add version   1.0-1 to r-copula (#20867)

* add version 5.0.2 to r-rcppparallel (#20866)

* add version   2.0-1 to r-compositions (#20865)

* add version 0.4.10 to r-rlang (#20796)

* add version 0.3.6 to r-vctrs (#20878)

* amrex: add ROCm support (#20809)

* add version 2.0-0 to r-colorspace (#20864)

* add version 1.3-1 to r-coin (#20863)

* add version   0.19-4 to r-coda (#20862)

* add version 1.3.7 to r-clustergeneration (#20861)

* add version 0.3-58 to r-clue (#20860)

* add version 0.7.1 to r-clipr (#20859)

* add version 2.2.0 to r-cli (#20858)

* add version 0.4-3 to r-classint (#20857)

* add version 0.1.2 to r-globaloptions (#20855)

* add version 2.3-56 to r-chron (#20854)

* add version 0.4.10 to r-checkpoint (#20853)

* add version 2.0.0 to r-checkmate (#20852)

* add version 1.18.1 to r-catools (#20850)

* add version 1.2.2.2 to r-modelmetrics (#20849)

* add version 3.0-4 to r-cardata (#20847)

* add version 1.0.1 to r-caracas (#20846)

* r-lifecycle: new package at v0.2.0 (#20845)

* add version 3.0-10 to r-car (#20844)

* add version 3.4.5 to r-processx (#20843)

* add version 1.5-12.2 to r-cairo (#20842)

* add version 0.2.3 to r-cubist (#20841)

* add version 2.6 to r-rmarkdown (#20838)

* add version 1.2.1 to r-blob (#20819)

* add version 4.0.4 to r-bit (#20818)

* add version 2.4-1 to r-bio3d (#20816)

* add version 0.4.2.3 to r-bibtex (#20815)

* add version 3.1-4 to r-bayesm (#20807)

* add version 1.2.1 to r-backports (#20806)

* add version 2.0.3 to r-argparse (#20805)

* add version 5.4-1 to r-ape (#20804)

* add version 0.8-18 to r-amap (#20803)

* r-pixmap: added new package (#20795)

* zoltan: source code location change (#20787)

* refactor path logic

* added some paths to make compilers and libs discoverable

* add  to LD_LIBRARY_PATH so that it finds libimf.so
and cleanup PEP8

* refactor path logic

* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi

* added vals for CC=icx, CXX=icpx, FC=ifx to generated module

* back out changes to intel-oneapi-mpi, save for separate PR

* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py

path is joined in _ld_library_path()

Co-authored-by: Robert Cohn <[email protected]>

* set absolute paths to icx,icpx,ifx

* dang close parenthesis

Co-authored-by: Robert Cohn <[email protected]>
Co-authored-by: mic84 <[email protected]>
Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Chuck Atkins <[email protected]>
Co-authored-by: darmac <[email protected]>
Co-authored-by: Danny Taller <[email protected]>
Co-authored-by: Tomoyasu Nojiri <[email protected]>
Co-authored-by: Shintaro Iwasaki <[email protected]>
Co-authored-by: Glenn Johnson <[email protected]>
Co-authored-by: Kelly (KT) Thompson <[email protected]>
Co-authored-by: Henrique Mendonça <[email protected]>
Co-authored-by: h-denpo <[email protected]>
Co-authored-by: Adam J. Stewart <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Tom Scogland <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Abhinav Bhatele <[email protected]>
Co-authored-by: a-saitoh-fj <[email protected]>
Co-authored-by: QuellynSnead <[email protected]>

* intel-oneapi-compilers/mpi: add module support (#20808)

Facilitate running intel-oneapi-mpi outside of Spack (set PATH,
LD_LIBRARY_PATH, etc. appropriately).

Co-authored-by: Robert Cohn <[email protected]>

* apple-clang: add correct path to compiler wrappers (#21662)

Follow-up to #17110

### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `[email protected]`.

An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.

* Resolve (post-cherry-picking) flake8 errors

* Update CHANGELOG and release version

* updates for new tutorial

update s3 bucket
update tutorial branch

* update tutorial public key

* respect -k/verify-ssl-false in _existing_url method (#21864)

* use package supplied autogen.sh (#20319)

* Python 3.10 support: collections.abc (#20441)

(cherry picked from commit 40a40e0265d6704a7836aeb30a776d66da8f7fe6)
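
The usual fix for this Python 3.10 change is an import shim; a minimal sketch (not necessarily Spack's exact change) looks like this:

```python
# Compatibility shim for the ABC move: the abstract base classes have
# lived in collections.abc since Python 3.3, and the old aliases in
# plain collections were removed outright in Python 3.10.
try:
    from collections.abc import Mapping, Sequence  # Python 3.3+
except ImportError:
    from collections import Mapping, Sequence      # Python 2
```

Code elsewhere can then test `isinstance(x, Mapping)` uniformly on any interpreter version.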

* concretizer: simplify "fact" method (#21148)

The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.

Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.

(cherry picked from commit ba42c36f00fe40c047121a32117018eb93e0c4b1)

* Improve error message for inconsistencies in package.py (#21811)

* Improve error message for inconsistencies in package.py

Sometimes directives refer to variants that do not exist.
Make it such that:

1. The name of the variant
2. The name of the package which is supposed to have
   such variant
3. The name of the package making this assumption

are all printed in the error message for easier debugging.

* Add unit tests

(cherry picked from commit 7226bd64dc3b46a1ed361f1e9d7fb4a2a5b65200)

* Updates to support clingo-cffi (#20657)

* Support clingo when used with cffi

Clingo recently merged in a new Python module option based on cffi.

Compatibility with this module requires a few changes to Spack: it does not automatically convert strings/ints/etc. to Symbol, and clingo.Symbol.string throws on failure.

manually convert str/int to clingo.Symbol types
catch stringify exceptions
add job for clingo-cffi to Spack CI
switch to potassco-vendored wheel for clingo-cffi CI
on_unsat argument when cffi

(cherry picked from commit 93ed1a410c4a202eab3a68769fd8c0d4ff8b1c8e)

* Run clingo-cffi tests in a container (#21913)

The clingo-cffi job has two issues to be solved:

1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/

The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".

The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).

For the time being, run the tests in a container; switch back to
PyPI whenever a new official version of clingo is released.

* repo: generalize "swap" context manager to also accept paths

The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.

Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.

Make a few adjustments to MockPackageMultiRepo, since its
docstring stated that it was supposed to mock spack.repo.Repo
while it was instead mocking spack.repo.RepoPath.

(cherry picked from commit 1a8963b0f4c11c4b7ddd347e6cd95cdc68ddcbe0)

* Move context manager to swap the current store into spack.store

The context manager can be used to swap the current
store temporarily, for any use case that may need it.

(cherry picked from commit cb2c233a97073f8c5d89581ee2a2401fef5f878d)

* Move context manager to swap the current configuration into spack.config

The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.

(cherry picked from commit 553d37a6d62b05f15986a702394f67486fa44e0e)

* bugfix for target adjustments on target ranges (#20537)

(cherry picked from commit 61c1b71d38e62a5af81b3b7b8a8d12b954d99f0a)

* Added a context manager to swap architectures

This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.

This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated

(cherry picked from commit 4558dc06e21e01ab07a43737b8cb99d1d69abb5d)

* make `spack fetch` work with environments (#19166)

* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
              the specs to be fetched, even when in an environment
* now: if there is no spec(s) provided to `spack fetch` we check
       if an environment is active and if yes we fetch all
       uninstalled specs.
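
The dispatch described above can be condensed to a small sketch; the spec objects are stood in for by anything with an `installed` flag, and the function name is illustrative:

```python
def specs_to_fetch(cli_specs, active_env):
    """Return the specs `spack fetch` should act on."""
    if cli_specs:
        return cli_specs  # explicit specs on the command line always win
    if active_env is None:
        raise RuntimeError("no specs given and no active environment")
    # No specs given: fetch everything in the environment not yet installed.
    return [s for s in active_env if not s.installed]
```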

* clingo: prefer master branch

Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.

- [x] make `master` the default so we don't have to keep telling people
  to install `clingo@master`. We'll update the preferred version when
  there's a new release.

* Clingo: fix missing import (#21364)

* clingo: added a package with option for bootstrapping clingo (#20652)

* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo

package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically

* clingo-bootstrap: apple-clang options to bootstrap statically on darwin

* clingo: fix the path of the Python interpreter

In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.

This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.

* clingo: the commit for "spack" version has been updated.

* clingo: fix typo (#22444)

* clingo-bootstrap: account for cray platform (#22460)

(cherry picked from commit 138312efabd534fa42d1a16e172e859f0d2b5842)

* Bootstrap clingo from sources (#21446)

* Allow the bootstrapping of clingo from sources

Allow python builds with system python as external
for MacOS

* Ensure consistent configuration when bootstrapping clingo

This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources

* Add command to inspect and clean the bootstrap store

 Prevent users from setting the install tree root to the bootstrap store

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <[email protected]>
(cherry picked from commit 10e9e142b75c6ca8bc61f688260c002201cc1b22)

* bootstrap: account for platform specific configuration scopes (#22489)

This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.

(cherry picked from commit 413c422e53843a9e33d7b77a8c44dcfd4bf701be)

* concretizer: unify logic for spec conditionals

This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.

Given some directives in a package like:

```python
depends_on("[email protected]+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```

We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:

```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
  attr(Name, Arg1)             : required_dependency_condition(ID, Name, Arg1);
  attr(Name, Arg1, Arg2)       : required_dependency_condition(ID, Name, Arg1, Arg2);
  attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
  dependency_condition(ID, Parent, Dependency);
  node(Parent).
```

And we handled `[email protected]+bar` and `mpi@2:` parts ("imposed constraints")
like this:

```prolog
attr(Name, Arg1, Arg2) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2).

attr(Name, Arg1, Arg2, Arg3) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```

These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.

Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:

```prolog
condition_holds(ID) :-
  condition(ID);
  attr(Name, A1)         : condition_requirement(ID, Name, A1);
  attr(Name, A1, A2)     : condition_requirement(ID, Name, A1, A2);
  attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).

attr(Name, A1)         :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2)     :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```

this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:

```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```

The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:

```prolog
dependency_holds(Package, Dependency, Type) :-
  dependency_condition(ID, Package, Dependency),
  dependency_type(ID, Type),
  condition_holds(ID).
```

Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:

```python
    def package_dependencies_rules(self, pkg, tests):
        """Translate 'depends_on' directives into ASP logic."""
        for _, conditions in sorted(pkg.dependencies.items()):
            for cond, dep in sorted(conditions.items()):
                condition_id = self.condition(cond, dep.spec, pkg.name)  # create a condition and get its id
                self.gen.fact(fn.dependency_condition(  # associate specifics about the dependency w/the id
                    condition_id, pkg.name, dep.spec.name
                ))
        # etc.
```

- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals

LocalWords:  concretizer mpi attr Arg concretize lp cairo fc fontconfig
LocalWords:  virtuals def pkg cond dep fn refactor github py

* bugfix: do not generate dep conditions when no dependency

We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.

- [x] change asp.py so that this can't happen -- we now only generate
      dependency types for possible dependencies.

* bugfix: allow imposed constraints to be overridden in special cases

In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.

This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.

We add one special case for this: dependencies of externals.

* spack location: bugfix for out of source build dirs (#22348)

* Channelflow: Fix the package. (#22483)

A search and replace went wrong in 2264e30d99d8b9fbdec8fa69b594e53d8ced15a1.

Thanks to @wadudmiah who reported this issue.

* Make SingleFileScope able to repopulate the cache after clearing it (#22559)

fixes #22547

SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidation for config files.
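
The clear-then-repopulate behavior being fixed can be sketched with a tiny file-backed scope whose cache reloads lazily on the next access. The `FileScope` name and shape here are illustrative, not Spack's actual SingleFileScope API:

```python
import json

class FileScope:
    """File-backed data with a cache that can be cleared and repopulated."""

    def __init__(self, path):
        self.path = path
        self._data = None  # None means "cache empty, reload on next get()"

    def get(self):
        if self._data is None:  # repopulate after __init__ or clear()
            with open(self.path) as f:
                self._data = json.load(f)
        return self._data

    def clear(self):
        self._data = None  # invalidate only; the reload happens lazily
```

The bug was the equivalent of `clear()` leaving the scope unable to reload, so later reads saw stale or empty configuration.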

* ASP-based solver: model disjoint sets for multivalued variants (#22534)

* ASP-based solver: avoid adding values to variants when they're set

fixes #22533
fixes #21911

Added a rule that prevents any value to slip in a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.

* Ensure disjoint sets have a clear semantics for external packages

* clingo: modify recipe for bootstrapping (#22354)

* clingo: modify recipe for bootstrapping

Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping

* Remove option that breaks on linux

* Give more hints for the current Python

* Disable CLINGO_BUILD_PY_SHARED for bootstrapping

* bootstrapping: try to detect the current python from std library

This is much faster than calling external executables

* Fix compatibility with Python 2.6

* Give hints on which compiler and OS to use when bootstrapping

This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.

* Use spec_for_current_python to constrain module requirement

(cherry picked from commit d5fa509b072f0e58f00eaf81c60f32958a9f1e1d)

* Externals are preferred even when they have non-default variant values

fixes #22596

Variants which are specified in an external spec are not
scored negatively if they encode a non-default value.

* Enforce uniqueness of the version_weight atom per node

fixes #22565

This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.

Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.

This led to unexpected results, like preferring to build a
new version of a package if the available external version
was older.

* bugfix for active when pkg is already active error (#22587)

* bugfix for active when pkg is already active error

Co-authored-by: Greg Becker <[email protected]>

* Fix clearing cache of InternalConfigScope (#22609)

Co-authored-by: Massimiliano Culpo <[email protected]>

* Bootstrap: add _builtin config scope (#22610)

(cherry picked from commit a37c916dff5a5c6e72f939433931ab69dfd731bd)

* Bootstrapping: swap store before configuration (#22631)

fixes #22294

A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
custom install tree not being taken into account if clingo
had to be bootstrapped.

This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.

* Remove erroneous warnings about quotes for from_source_file (#22767)

* "spack build-env" searches env for relevant spec (#21642)

If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.

This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.

This makes a similar change to spack cd.

If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).

Co-authored-by: Gregory Becker <[email protected]>

* ASP-based solver: assign OS correctly with inheritance from parent (#22896)

fixes #22871

When in presence of multiple choices for the operating system
we were lacking a rule to derive the node OS if it was
inherited.

* Externals with merged prefixes (#22653)

We remove system paths from search variables like PATH and 
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages 
may be installed to prefixes that are not actually system paths 
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
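
The reordering amounts to partitioning a search path by prefix; a hedged sketch (the helper name is illustrative, not Spack's actual function):

```python
import os

def deprioritize_externals(search_paths, external_prefixes):
    """Keep Spack-built paths first; move paths under external prefixes last."""
    def is_external(p):
        return any(p == e or p.startswith(e + os.sep)
                   for e in external_prefixes)
    internal = [p for p in search_paths if not is_external(p)]
    external = [p for p in search_paths if is_external(p)]
    return internal + external  # relative order preserved within each group
```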

* ASP-based solver: suppress warnings when constructing facts (#23090)

fixes #22786

Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.

* Use Python's built-in machinery to import compilers (#23290)

* Add "spack [cd|location] --source-dir" (#22321)

* spack location: fix usage without args (#22755)

* Import hooks using Python's built-in machinery (#23288)

The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, such a function is unnecessary, since we
neither need to bind a custom name to a module nor load it from an
unknown location.

This PR thus modifies spack.hook in the following ways:

- Use __import__ instead of spack.util.imp.load_source (this
  addresses #20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
  to stay local
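
Conceptually, loading hooks with the built-in machinery reduces to importing submodules by dotted name; a sketch with illustrative names (not `spack.hooks` itself, which uses `__import__` directly):

```python
import importlib

def load_submodules(package, names):
    """Import each `package.name` submodule via Python's own import system."""
    return [importlib.import_module("%s.%s" % (package, name))
            for name in names]
```

No custom exec-from-file helper is needed, and local imports inside the loaded modules behave normally.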

* ASP-based solver: no intermediate package for concretizing together (#23307)

The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).

Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.

* ASP-based solve: minimize compiler mismatches (#23016)

fixes #22718

Instead of trying to maximize the number of
matches (preferred behavior), try to minimize
the number of mismatches (unwanted behavior).

* performance: speed up existence checks in packages (#23661)

Spack doesn't require users to manually index their repos; it reindexes the indexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.

When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`).  That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations.  It's a win now to make `Repo.exists()` check files directly.

**Fix:**

This PR does a number of things to speed up `spack load`, `spack info`, and other commands:

- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
      (avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
      a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
      exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
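
The gist of the fast path is that existence of a package is just existence of its `package.py`, checked directly. A sketch assuming the builtin repo layout (`repo_root/packages/<name>/package.py`); the helper name is illustrative:

```python
import os

def package_exists(repo_root, pkg_name):
    """Cheap existence check: one os.path.exists(), no repo index needed."""
    return os.path.exists(
        os.path.join(repo_root, "packages", pkg_name, "package.py"))
```

One `stat()` per query instead of one per package in the repository is what makes small commands like `spack info` fast again.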

* Style fixes for v0.16.2 release

* Update CHANGELOG and release version for v0.16.2

* Update command to setup tutorial (#24488)

* Fix fetching for Python 3.9.6 (#24686)

When using Python 3.9.6, Spack is no longer able to fetch anything. Commands like `spack fetch` and `spack install` all break.

Python 3.9.6 includes a [new change](https://github.com/python/cpython/pull/25853/files#diff-b3712475a413ec972134c0260c8f1eb1deefb66184f740ef00c37b4487ef873eR462) that means that `scheme` must be a string; it cannot be None. The solution is to use an empty string like the method default.

Fixes #24644. Also see https://github.com/Homebrew/homebrew-core/pull/80175 where this issue was discovered by CI. Thanks @branchvincent for reporting such a serious issue before any actual users encountered it!

Co-authored-by: Todd Gamblin <[email protected]>

* clang/llvm: fix version detection (#19978)

This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:

```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```

This caused clang's version to be misdetected as:

```
[email protected]
Target:
```

This resulted in errors when trying to actually use it as a compiler.

When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:

```
==> Warning: "[email protected]+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for [email protected]+clang+lld+lldb]
```

Changing the regex to only match until the end of the line fixes these
problems.

Fixes: #19473
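
The anchoring problem can be demonstrated in a few lines. The regexes below are illustrative, not the exact patterns in Spack's compiler definition: a pattern allowed to run across lines swallows the `Target:` line, while one confined to the version token does not.

```python
import re

# Sample clang output: version on the first line, target triple on the next.
output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n"

# With DOTALL, ".*" runs past the newline and drags in the Target line.
greedy = re.search(r"clang version (.*)", output, re.DOTALL)

# Matching only version characters stops at the end of the version token.
fixed = re.search(r"clang version ([0-9.]+)", output)
```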

* Fix use of quotes in Python build system (#22279)

* Cray: fix extracting paths from module files (#23472)

Co-authored-by: Tiziano Müller <[email protected]>

* Use AWS CloudFront for source mirror (#23978)

Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.

CloudFront is 16x faster (or more) than the old bucket.

- [x] change mirror to https://mirror.spack.io

* locks: only open lockfiles once instead of for every lock held (#24794)

This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.

The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.

Because of this, we need to track open lock files so that we only close them when
a process no longer needs them.  We do this by tracking each lockfile by its
inode and process id.  This has several nice properties:

1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
   process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
   just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
   inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
   number of times necessary for the locks we have.

Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.

Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
      needed (this avoids inadvertently releasing locks that should not be released).

* Ensure all roots of an installed environment are marked explicit in db (#24277)

* docker: Fix CentOS 6 build on Docker Hub (#24804)

This change makes yum usable again on CentOS 6

* docker: remove boto3 from  CentOS 6 since it requires and updated pip (#24813)

* Remove centos:6 image references

This was EOL November 30th, 2020. I believe the "builds" are failing on
develop because of it.

* Fix style tests

* Bump version and update changelog

Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Danny Taller <[email protected]>
Co-authored-by: Greg Becker <[email protected]>
Co-authored-by: Adam J. Stewart <[email protected]>
Co-authored-by: Martin Aumüller <[email protected]>
Co-authored-by: Todd Gamblin <[email protected]>
Co-authored-by: Massimiliano Culpo <[email protected]>
Co-authored-by: George Hartzell <[email protected]>
Co-authored-by: MichaelLaufer <[email protected]>
Co-authored-by: michael laufer <[email protected]>
Co-authored-by: Andrew W Elble <[email protected]>
Co-authored-by: Harmen Stoppels <[email protected]>
Co-authored-by: Robert Maynard <[email protected]>
Co-authored-by: Massimiliano Culpo <[email protected]>
Co-authored-by: Tamara Dahlgren <[email protected]>
Co-authored-by: Scott Wittenburg <[email protected]>
Co-authored-by: Robert Cohn <[email protected]>
Co-authored-by: Ye Luo <[email protected]>
Co-authored-by: Frank Willmore <[email protected]>
Co-authored-by: Robert Underwood <[email protected]>
Co-authored-by: Henrique Mendonça <[email protected]>
Co-authored-by: Nathan Hanford <[email protected]>
Co-authored-by: Nathan Hanford <[email protected]>
Co-authored-by: eugeneswalker <[email protected]>
Co-authored-by: Yang Zongze <[email protected]>
Co-authored-by: mic84 <[email protected]>
Co-authored-by: Chuck Atkins <[email protected]>
Co-authored-by: darmac <[email protected]>
Co-authored-by: Tomoyasu Nojiri <[email protected]>
Co-authored-by: Shintaro Iwasaki <[email protected]>
Co-authored-by: Glenn Johnson <[email protected]>
Co-authored-by: Kelly (KT) Thompson <[email protected]>
Co-authored-by: h-denpo <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Tom Scogland <[email protected]>
Co-authored-by: Thomas Green <[email protected]>
Co-authored-by: Abhinav Bhatele <[email protected]>
Co-authored-by: a-saitoh-fj <[email protected]>
Co-authored-by: QuellynSnead <[email protected]>
Co-authored-by: Tamara Dahlgren <[email protected]>
Co-authored-by: Phil Tooley <[email protected]>
Co-authored-by: Josh Essman <[email protected]>
Co-authored-by: Andreas Baumbach <[email protected]>
Co-authored-by: Maxim Belkin <[email protected]>
Co-authored-by: Rémi Lacroix <[email protected]>
Co-authored-by: Cyrus Harrison <[email protected]>
Co-authored-by: Peter Scheibel <[email protected]>
Co-authored-by: Todd Gamblin <[email protected]>
Co-authored-by: Michael Kuhn <[email protected]>
Co-authored-by: Tiziano Müller <[email protected]>
@cdfh

cdfh commented Jun 10, 2022

This change has caused me much stress. It is a silent change in behaviour, which I believe a project like spack should strive hard to avoid (frequent changes in behaviour, especially silent ones, anger users).

In addition, I would strongly argue that it is a bug to default against libc++ when the +libcxx variant is present: giving +libcxx strongly implies that the user wants libc++ support. But because you can't mix and match standard library implementations, even across libraries, it makes no sense to have libc++ support unless it's being used everywhere.

Consider my use case, which doesn't feel like it's unusual/odd:

  1. I compile llvm+libcxx+clang
  2. I discover/configure it with spack compiler find
  3. I install libraries with %clang
  4. I compile my own project, managed by spack (formerly using spack setup), with %clang

I want my own project to use libc++. But, it makes absolutely no sense for me to compile my project with -stdlib=libc++ because if I do, then all hell breaks loose due to mixing and matching standard libraries (any library my project depends upon will pull in libstdc++).

When libc++ was the default, the user could still override it in compilers.yaml or via -stdlib, so it's not like the original behaviour was overly restrictive. And if you don't want to use it, why build llvm with +libcxx at all?

But regardless, the main thrust of my argument is that this change silently changed behaviour and that this will cost users a great deal of time and energy as their ecosystems break as a result. I would thus propose that this change is harmful and should be reverted.

@ye-luo
Contributor Author

ye-luo commented Jun 10, 2022

If you build LLVM with clang and libcxx enabled via CMake directly, clang still defaults to libstdc++ on Linux unless the CMake option -DCLANG_DEFAULT_CXX_STDLIB=libc++ is explicitly set to request libc++. I believe LLVM has solid reasons not to default to libc++ automatically, so we just followed LLVM's choice. Similarly, if you install clang and libc++ from a package manager, clang won't automatically pick libc++.

Enabling libc++ in LLVM and using libc++ by default in clang are separate concerns. +libcxx means libc++ is available to use, but it is the user's choice to opt in when compiling code. It would also be wasteful to compile two versions of llvm+clang whose only difference is whether libstdc++ or libc++ is the default, just to serve these two needs.

I think to address your issue, we can have a variant which does exactly that: setting -DCLANG_DEFAULT_CXX_STDLIB=libc++ on demand. In this way, we can serve all three use cases:

  1. clang selects the standard C++ library based on the LLVM default; libc++ is compiled and available for use.
  2. clang selects the standard C++ library based on the LLVM default; libc++ is not compiled. No separate clang build is needed, as this is covered by case 1 already.
  3. clang selects libc++ as the default C++ library; libc++ is compiled.
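The three cases above boil down to one opt-in CMake flag plus the invariant raised earlier in this thread (you cannot default to libc++ without building it). A minimal sketch of that logic, outside of spack; the function and the `libcxx_default` variant name are hypothetical illustrations, not the actual recipe:

```python
def clang_stdlib_cmake_args(libcxx_built: bool, libcxx_default: bool) -> list:
    """Map hypothetical variant settings to LLVM CMake options.

    libcxx_built    -- whether the +libcxx variant is enabled (libc++ gets built)
    libcxx_default  -- whether clang should use libc++ when no -stdlib is given
    """
    # Invariant: defaulting to libc++ only makes sense if libc++ is built;
    # otherwise clang might pick up a stray libc++ from elsewhere.
    if libcxx_default and not libcxx_built:
        raise ValueError("defaulting to libc++ requires building it (+libcxx)")
    args = []
    if libcxx_default:
        # CLANG_DEFAULT_CXX_STDLIB is the CMake cache variable discussed
        # above; users can still override it per-compile with -stdlib=.
        args.append("-DCLANG_DEFAULT_CXX_STDLIB=libc++")
    return args
```

Case 1/2 corresponds to `clang_stdlib_cmake_args(x, False)` (no extra flag, LLVM default), case 3 to `clang_stdlib_cmake_args(True, True)`.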

@cdfh

cdfh commented Jun 10, 2022

So we just followed LLVM choice.

That may be so, but it changed spack's behaviour and regressed a previous bug report (#5942, #5943). Spack has a large number of users, many using it in production environments. Changing its behaviour should be considered a major event not to be undertaken lightly. In this PR, there was another option: introduce a multi-valued stdlib variant defaulting to the existing behaviour (which was actually your original proposal, and would have been fine). Had this been done, no existing infrastructures would have been broken.

I think to address your issue, we can have a variant which exactly does setting -DCLANG_DEFAULT_CXX_STDLIB=libc++ on demand. In such away, we have serve all the three use cases

The issue here isn't how to make it work (I've already addressed that locally). The issue is that the behaviour changed, and that this resulted in breakages in at least one spack deployment (and probably many more). (In my case, Qt suddenly failed to compile for cryptic reasons due to mismatches in the standard library; this change introduced build errors in qt%clang.) For me, the price has already been paid on that issue and there's no fixing it. But for others, it hasn't been paid yet, and the solution is to revert to the existing behaviour and introduce this PR in a way that doesn't change previous defaults.

@haampie
Member

haampie commented Jun 10, 2022

But for others, it hasn't been paid yet

This was merged a year and a half ago and part of two 0.x releases, with no complaints. The previous default was arguably highly unexpected, I don't know any distro that defaults to libc++.

@cdfh

cdfh commented Jun 10, 2022

Well, no, there's now been one complaint. And it presumably also affects the original poster of #5943, where the default behaviour was originally introduced under the oversight of spack's original author.

I don't update spack very often. Indeed, it has evidently been over a year and a half since I updated. Instead, I update individual packages in the package database when new versions are needed. The reason I don't update frequently is because each time I update spack, things break. Like this. I'm presumably not the only user who takes this workflow, so the fact that only two people are known to be affected over a 1.5 year period shouldn't be taken as evidence that problems haven't been more widespread.

@ye-luo
Contributor Author

ye-luo commented Jun 10, 2022

Imposing a rule like "no breaking changes" just doesn't work in most software development, although avoiding breakage is always desirable. I don't feel I'm in any position to discuss policy, so let's move on to consider what we can do now. As the whole discussion history of this PR indicates, the old behaviour was not desired. I think adding a variant for setting libc++ as the default is a candidate fix to help users like @cdfh.

@cdfh

cdfh commented Jun 10, 2022

Imposing rule like "Not making any breaking changes" just doesn't work in most software development

I can count the number of times that git, emacs, llvm, gcc, or cmake changed their behaviour in a way that affected my tooling on a single hand, so it is certainly possible. Very occasionally, cmake does, but when it does, it only does so after having first deprecated the original behaviour (giving verbal warnings at runtime) across multiple versions.

I would agree with your premise that backwards compatibility isn't feasible for general applications, but spack is tooling rather than an application. People write infrastructures around it. When spack changes, those infrastructures break. So even if it can't adhere to a strict policy of backwards compatibility, it should strive towards it as much as is feasible. I would argue that in this case, it would have been very feasible to satisfy @ye-luo's original feature request without breaking existing infrastructures, just by adding a new variant, as originally proposed, instead of changing the meaning of an existing variant.

But if rolling back this change isn't welcomed, then I don't propose any further changes as it is possible to work around this issue for users who want libc++ by modifying compilers.yaml. My complaint isn't really the functionality, but rather the change in behaviour.
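For users who do want the old behaviour back, the compilers.yaml workaround mentioned above might look roughly like this (a sketch only; the paths, version, and OS fields are placeholders for your own clang entry):

```yaml
compilers:
- compiler:
    spec: [email protected]
    paths:
      cc: /path/to/spack/opt/.../llvm-13.0.0-.../bin/clang
      cxx: /path/to/spack/opt/.../llvm-13.0.0-.../bin/clang++
      f77: null
      fc: null
    flags:
      # Make every C++ compile and link with this compiler use libc++,
      # restoring the pre-#19933 default for builds done through spack.
      cxxflags: -stdlib=libc++
      ldflags: -stdlib=libc++
    operating_system: ubuntu20.04
    modules: []
```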

@ye-luo ye-luo deleted the llvm-clang_cxx_stdlib branch June 10, 2022 14:21
@trws
Contributor

trws commented Jun 10, 2022

@cdfh, I understand your position, but taking this position as a complaint about a breaking change a year and a half ago is not reasonable. If you would like to open an issue, or better pull request, to discuss having +libcxx use it by default for compiling c++ code with clang, please feel free to do so, but this is not the place for it.

I want my own project to use libc++. But, it makes absolutely no sense for me to compile my project with -stdlib=libc++ because if I do, then all hell breaks loose due to mixing and matching standard libraries (any library my project depends upon will pull in libstdc++).

That's true if you do it in your project, but not if you specify the flags to spack, which will build a new tree with those flags and generate a consistent environment for you.
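In other words, rather than adding -stdlib=libc++ only to your own project's build system, the flags can go into the spec so the whole dependency tree is rebuilt consistently. A hypothetical invocation (package name is a placeholder):

```console
# Build mypackage and all of its dependencies against libc++,
# so no dependency pulls in libstdc++ behind your back.
$ spack install mypackage %clang cxxflags="-stdlib=libc++" ldflags="-stdlib=libc++"
```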

In addition, I would strongly argue that it is a bug to default against libc++ when the +libcxx variant is present: giving +libcxx strongly implies that the user wants libc++ support. But because you can't mix and match standard library implementations, even across libraries, it makes no sense to have libc++ support unless it's being used everywhere.

Feel free to report this bug upstream on llvm, where this is what happens by default when libcxx is enabled. We are passing that behavior on, not producing it.

When libc++ was the default, the user could still override it in compilers.yaml or via -stdlib, so it's not like the original behaviour was overly restricting.

Exactly like it is now.

But if you're not wanting to use it, why build llvm with +libcxx at all?

Frequently I build with libcxx so that I can build a specific project with it, or statically link libcxx into a project that needs to have no visible stdlib dependency at runtime. I also do this as part of multi-stage compiler bootstrapping where I want to build a compiler that will use libstdc++ but will then be used to build another that will be built with libc++.

it presumably also affects the original poster of #5943, where the default behaviour was originally introduced under the oversight of spack's original author.

That bug was fixed, in a way, by the patch, but only in that the resulting clang could run. The underlying issue was that rpathing for compiler standard libraries was not yet being handled correctly by spack, so building clang without libcxx would have produced the same error in the same way. He would not have had that issue today.

I would agree with your premise that backwards compatibility isn't feasible for general applications, but spack is tooling rather than an application. People write infrastructures around it. When spack changes, those infrastructures break. So even if it can't adhere to a strict policy of backwards compatibility, it should strive towards it as much as is feasible. I would argue that in this case, it would have been very feasible to satisfy @ye-luo's original feature request without breaking existing infrastructures, just by adding a new variant, as originally proposed, instead of changing the meaning of an existing variant.

That is a very fair point, and perhaps we should have done it that way. But I am not going to accept making another breaking change to fix a complaint about a breaking change. Doing so now would break far more people than leaving it because reuse support will cause recently built binaries to be reused more often and the issue will come up faster and more frequently.

@adamjstewart
Member

Just adding my two cents:

Spack is still in beta release. Until we have a 1.X release, users should expect backwards-incompatible changes from time to time. For major things like removing versions/packages/commands, we always deprecate them first and remove them after at least one full release cycle. We don't yet have any way to deprecate variants, but I would like to add that.

In general, Spack has so many packages that it's almost impossible to avoid these kinds of changes in every package. In order to make packages more stable, we added a "maintainer" system where users can register themselves as willing to help maintain and test a particular package recipe. For example, I'm personally maintaining all Python libraries in Spack, particularly geospatial and ML packages. When a user opens an issue or PR relating to a package I maintain, I'm automatically notified and asked to review.

The core maintainers of Spack (the people with merge privileges) can't possibly know everything about every package in Spack. When someone proposes a change like this in a PR, a developer like me has to trust the "maintainers" of that build recipe to make the appropriate decision. If a package has no official "maintainer", I just merge PRs willy nilly even if they involve a change in default behavior.

TL;DR: if you have packages like LLVM that you rely on and you would be willing to help maintain those build recipes, I would encourage you to add yourself to the list of "maintainers" for that package.py so that you can weigh in on these kinds of changes.

@cdfh

cdfh commented Jun 10, 2022

@trws

But I am not going to accept making another breaking change to fix a complaint about a breaking change.

That's fair enough, in which case please disregard my earlier proposal to do so.

However, I very much stand by that this is the place to raise the issue that this change should not have been made at the time, and I believe that point needs to be made in order to raise awareness of the need for greater stability. If nobody complains, then the spack maintainer community presumes everything is fine (which was exactly @haampie's position from earlier: nobody complained, therefore no foul).

I've made a considerable number of bugfixes to spack. But none of them have been shared upstream, which is tragic. The reason? My branch is often very far behind (as observed, over a year behind until this week). Once I track down and fix a spack bug (which is normally a very time-consuming process), I do not then have time to contribute upstream because doing so would require forward porting it to origin/HEAD, which after an already long bug hunt, I simply do not have time to do. If my branch was already at origin/HEAD (or close to it), then contributing back upstream would be easy. But previous efforts to keep my branch up to date resulted in way too frequent failures, and so now I do it very rarely (I have a snapshot that works, and only pull when I really, really need to).

So, I withdraw my proposals for reverting with respect to the current PR, but please take note: when behaviour changes unexpectedly, it breaks things for users and creates challenges that will prevent some users from contributing back to spack's development.

@adamjstewart
Member

I've made a considerable number of bugfixes to spack. But none of them have been shared upstream, which is tragic. The reason? My branch is often very far behind (as observed, over a year behind until this week).

An approach I recommend is to have a "production copy" of Spack that sticks to stable releases and a "development copy" of Spack in your home directory that you keep up-to-date with develop. This worked for me for many years as a HPC sys admin at ANL.

@cdfh

cdfh commented Jun 10, 2022

@adamjstewart

For major things like removing versions/packages/commands, we always deprecate them first and remove them after at least one full release cycle.

That is absolutely perfect and I support that. If infrastructure breaks because a functionality vanished, it's immediately recognised and pretty easily worked around. For example, we heavily relied upon spack setup, and when one day it vanished, it was well documented what the alternative was and it took minutes to recognise the problem and recover.

However, this change, by contrast, didn't remove functionality but changed existing functionality in a way that had very subtle repercussions. Instead of breaking infrastructure in an obvious "command does not exist" way, it took literally days and a lot of stress to hunt down what was happening (qt was mixing standard library versions in a way that required manual interpretation of nm dumps to track down). Obviously, nobody anticipated at the time of creating this PR that it would cause such stress. But the point I'm making is that the original request suggested a fix that wouldn't have changed existing behaviour (a new variant), and this direction was abandoned because there were already a lot of variants and changing behaviour seemed cleaner. That may be so from a design perspective, but from a software maintenance perspective I plead: any time functionality is changed, it will affect some percentage of users in esoteric ways, so when a nearly equivalent alternative exists (as in this case), please do not turn away from it lightly.

