Update Spack to develop branch (#1)

Merged

kmanalo merged 2237 commits into kmanalo:develop on Jan 21, 2020

Conversation
* version bump (modified: var/spack/repos/builtin/packages/py-slepc4py/package.py)
* slepc: update URL
* slepc4py: add 3.11.0 and update maintainers

Co-authored-by: Satish Balay <[email protected]>
Add latest release.
…ks in comments. (#14281)
`ViewDescriptor.regenerate()` checks repeatedly whether packages are installed and also does a lot of DB queries. Put a read transaction around the whole thing to avoid repeatedly locking and unlocking the DB.
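The effect of wrapping many queries in a single read transaction can be sketched with a minimal, hypothetical reentrant transaction (illustrative only, not Spack's real implementation): nested entries reuse the outer hold instead of locking and unlocking the DB once per query.

```python
# Hedged sketch: a reentrant read transaction. Only the outermost
# enter/exit touches the lock; nested transactions just bump a counter.
from contextlib import contextmanager


class DB:
    def __init__(self):
        self.lock_events = []  # records real lock/unlock operations
        self._depth = 0        # current transaction nesting depth

    @contextmanager
    def read_transaction(self):
        if self._depth == 0:
            self.lock_events.append("lock")   # real lock only at depth 0
        self._depth += 1
        try:
            yield
        finally:
            self._depth -= 1
            if self._depth == 0:
                self.lock_events.append("unlock")  # real unlock at depth 0


db = DB()
with db.read_transaction():          # one outer transaction...
    for _ in range(3):
        with db.read_transaction():  # ...nested queries reuse the hold
            pass
assert db.lock_events == ["lock", "unlock"]  # locked exactly once
```

Without the outer transaction, the loop above would lock and unlock three times instead of once.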
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released. This is
pretty obviously wrong.
- [x] Refactor `Lock` so that a release function can be passed to the
`Lock` and called *only* when a lock is really released.
- [x] Refactor `LockTransaction` classes to use the release function
instead of checking the return value of `release_read()` / `release_write()`
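The release-function pattern in the checklist above can be sketched with a toy lock (hypothetical names; Spack's real `Lock` is more involved): the side effect runs only when the outermost write hold is actually released, never on a nested release.

```python
# Minimal sketch of passing a release function to a lock so side effects
# (like writing the DB file) happen only on the real release.
class Lock:
    def __init__(self, release_fn=None):
        self._writes = 0               # nesting depth of write holds
        self._release_fn = release_fn  # called only on the real release

    def acquire_write(self):
        self._writes += 1
        return self._writes == 1       # True only for the outermost acquire

    def release_write(self):
        assert self._writes > 0
        self._writes -= 1
        if self._writes == 0:
            # The lock is really released: run the side effect now.
            if self._release_fn:
                self._release_fn()
            return True
        return False


written = []
lock = Lock(release_fn=lambda: written.append("db written"))
lock.acquire_write()
lock.acquire_write()                    # nested hold
assert lock.release_write() is False    # inner release: no write yet
assert lock.release_write() is True     # outermost release triggers write
assert written == ["db written"]
```

Because the lock itself decides when to call the function, a transaction no longer has to guess from `release_write()`'s return value whether it should write.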
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:
```
1 with spack.store.db.read_transaction():
2     with spack.store.db.write_transaction():
3         ...
4     with spack.store.db.read_transaction():
          ...
```
The `WriteTransaction` on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.
The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.
The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it. Since the DB was never written, we got stale data.
- [x] Make `Lock.release_write()` return `True` whenever we release the
*last write* in a nest.
Our `LockTransaction` class was reading overly aggressively. In cases
like this:
```
1 with spack.store.db.read_transaction():
2     with spack.store.db.write_transaction():
3         ...
```
The `ReadTransaction` on line 1 would read in the DB, but the
`WriteTransaction` on line 2 would read in the DB *again*, even though we
had a read lock the whole time. `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.
- [x] Make `Lock.acquire_write()` return `False` in cases where we already
  had a read lock.
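The idea can be sketched with a toy lock (illustrative names only) that tracks both read and write nesting, so `acquire_write()` can report whether a fresh read of the DB is actually needed:

```python
# Hedged sketch: acquiring a write lock returns False when a read lock
# is already held, signaling the caller to skip re-reading the DB.
class NestingLock:
    def __init__(self):
        self._reads = 0
        self._writes = 0

    def acquire_read(self):
        self._reads += 1
        # True only if this is the first hold of any kind: caller must read.
        return self._reads == 1 and self._writes == 0

    def acquire_write(self):
        self._writes += 1
        # False if any lock was already held: the in-memory DB is fresh.
        return self._writes == 1 and self._reads == 0


lock = NestingLock()
assert lock.acquire_read() is True     # outermost hold: must read the DB
assert lock.acquire_write() is False   # nested in a read: skip re-reading
```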
Environments need to read the DB a lot when installing all specs.

- [x] Put a read transaction around `install_all()` and `install()` to avoid repeated locking.
`spack install` previously concretized, wrote the entire environment out, regenerated views, then wrote and regenerated views again. Regenerating views is slow, so ensure that we only do it once.

- [x] Add an option to `env.write()` to skip view regeneration.
- [x] Add a note on whether `regenerate_views()` shouldn't just be a separate operation -- it is not clear whether we want to keep it as part of `write()` to ensure consistency, or take it out to avoid performance issues.
`os.path.exists()` will report False if the target of a symlink doesn't exist, so we can avoid a costly call to realpath here.
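The behavior relied on here can be checked with the standard library alone: `os.path.exists()` follows symlinks and reports `False` for a dangling link, while `os.path.lexists()` sees the link itself, with no `realpath()` call needed.

```python
# Demonstrates that os.path.exists() follows symlinks: a link whose
# target is missing "does not exist", while the link itself does.
import os
import tempfile

tmp = tempfile.mkdtemp()
link = os.path.join(tmp, "dangling")
os.symlink(os.path.join(tmp, "missing-target"), link)  # target never created

assert os.path.exists(link) is False   # follows the link: target missing
assert os.path.lexists(link) is True   # the link itself is present
```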
`ViewDescriptor.regenerate()` was copying specs and stripping build dependencies, which clears `_hash` and other cached fields on concrete specs, which causes a bunch of YAML hashes to be recomputed.

- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as these will be unchanged.
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads `spec.yaml` files, which is slow. It's fine to do this once, but `view.remove_specs()` *also* calls it immediately afterwards.

- [x] Pass the result of `get_all_specs()` as an optional parameter to `view.remove_specs()` to avoid reading `spec.yaml` files twice.
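The fix is a common pattern: accept a precomputed result as an optional argument and fall back to the expensive call only when it isn't supplied. A minimal sketch with illustrative names (not Spack's actual signatures):

```python
# Hedged sketch: reuse an expensive scan by passing its result along,
# instead of recomputing it inside every function that needs it.
calls = []


def get_all_specs():
    calls.append(1)            # stands in for slowly reading spec.yaml files
    return ["a", "b", "c"]


def remove_specs(specs_to_remove, all_specs=None):
    if all_specs is None:      # fall back to the expensive scan
        all_specs = get_all_specs()
    return [s for s in all_specs if s not in specs_to_remove]


all_specs = get_all_specs()                            # one expensive read...
remaining = remove_specs(["b"], all_specs=all_specs)   # ...reused here
assert remaining == ["a", "c"]
assert len(calls) == 1                                 # the scan ran only once
```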
…ling (#13789)

* Some packages (e.g. mpfr at the time of this patch) can have patches with the same name but different contents (which apply to different versions of the package). This appends part of the patch hash to the cache file name to avoid conflicts.
* Some exceptions which occur during fetching are not a subclass of SpackError and therefore do not have a 'message' attribute. This updates the logic for mirroring a single spec (`add_single_spec`) to produce an appropriate error message in that case (where before it failed with an AttributeError).
* In various circumstances, a mirror can contain the universal storage path but not a cosmetic symlink; in this case it would not generate a symlink. Now `spack mirror create` will create a symlink for any package that doesn't have one.
When creating a cosmetic symlink for a resource in a mirror, remove it if it already exists. The symlink is removed in case the logic to create the symlink has changed.
The targets for the cosmetic paths in mirrors were being calculated incorrectly as of fb3a3ba: the symlinks used relative paths as targets, and the relative path was computed relative to the wrong directory.
Since cache_mirror does the fetch itself, it also needs to do the checksum itself if it wants to verify that the source stored in the mirror is valid. Note that this isn't strictly required because fetching (including from mirrors) always separately verifies the checksum.
When updating a mirror, Spack was re-retrieving all patches (since the fetch logic for patches is separate). This updates the patch logic to allow the mirror logic to avoid this.
BundlePackages use a noop fetch strategy. The mirror logic was assuming that the fetcher had a resource to cache after performing a fetch. This adds a special check to skip caching if the stage is associated with a BundleFetchStrategy. Note that this should allow caching resources associated with BundlePackages.
Add version Octave 5.1.0 including sha256.
It seems that stable versions of perl also install a `perlX.Y.Z` binary; however, this binary can hang if used in conjunction with Spack's sbang workaround, as observed during automake's build.
This PR adds CudaPackage in order to pick up the cuda/compiler conflicts defined in CudaPackage.
* add new package: geode
* remove provides for gemfire
to disable use of libunwind. Without this, mesa fails to build using recent Cray compilers (cce 9 and higher) on aarch64 systems.

Signed-off-by: Howard Pritchard <[email protected]>
This updates the UnifyFS package to account for the latest 0.9.0 version and removes support for version 0.2.0.
* Fix use of sys.executable for module/env commands
* Fix unit tests
* More consistent quotation, less duplication
* Fix import syntax
* Added new hashes for the protobuf and py-protobuf packages.
* Fixed flake8
- The suite-sparse author publishes new versions starting with 5.5.0 on GitHub, see https://github.com/DrTimothyAldenDavis/SuiteSparse/releases and http://faculty.cse.tamu.edu/davis/SuiteSparse/
- Change spack to download from there
- Updated sha256 checksums from GitHub for all available releases
- For versions 5.4.0, 5.5.0, 5.6.0 a slightly different compilation is necessary: first `make default`, then `make install`.

Summary of the version changes (+ added, - removed [because not available on GitHub]):

```
+ 5.6.0
+ 5.5.0
+ 5.4.0
  5.3.0
  5.2.0
+ 5.1.2
  5.1.0
+ 5.0.0
+ 4.5.6
  4.5.5
- 4.5.4
  4.5.3
- 4.5.1
```
Co-authored-by: mlhardware <[email protected]>
* Add +cfitsio variant to wcslib dependency
* Replace ncurses dependency with readline dependency: casacore explicitly may depend on readline, not ncurses
* Add workaround for casacore's readline dependency: casacore optionally depends upon readline, but its CMakeLists.txt provides no user control over whether or not readline becomes a dependency. As readline is often present by default on systems, it's better for this package to explicitly depend on readline in order to prevent linking to whatever system version of the library happens to be found during the build process. This should be considered a workaround until casacore's CMakeLists.txt is fixed.
* Apply workaround for casacore's dependency on SOFA: similar to the issues with casacore's readline dependency, casacore's optional dependency on SOFA does not provide the user with a means of controlling the dependency during build time. Unlike the readline library, the SOFA library is unlikely to exist on most systems by default. As the SOFA dependency is only optionally used for testing casacore, requiring it by default is not a good workaround. Until casacore's CMakeLists.txt is fixed, this variant has been removed to avoid unexpected library dependencies in the installed package.
* Add newer casacore versions
* Add mpokorny to maintainer field
* Added py-spdlog package
* pleasing flake-8
* pleasing flake-8
* addressed some comments from adamjstewart
* changed URL for archive
* replaced with pypi.io url
Unfortunately UCX 1.7.0 is appearing in RPMs before it's officially released. There's a problem with Open MPI 4.0.x (where x < 3) and this version of UCX, namely that the UCT BTL fails to compile. See open-mpi/ompi#7128.

This patch works around the problem by disabling the build of the UCT BTL for releases 4.0.0 to 4.0.2.

Add hppritcha (me) as maintainer.

Signed-off-by: Howard Pritchard <[email protected]>
* capnproto: New package.
* capnproto: Fix flake8 errors.
* Remove characters invalid in Python 2.
…ly linked python (#14559)
Signed-off-by: Sean Smith <[email protected]>
* Add new kcov package
* Fix linking error and add test
This PR adds the CudaPackage mixin class to py-theano. This replaces the `gpu` variant with the `cuda` variant.