
Backports for 0.16.3#25991

Merged
becker33 merged 9 commits into spack:releases/v0.16 from haampie:backports/v0.16.3
Sep 20, 2021

Conversation

@haampie (Member) commented Sep 16, 2021

Included:

Not included:

Does not apply cleanly; can't really be backported without major changes or pulling in multiple concretizer refactoring PRs:

Makes macOS tests fail with the error reported below, so I'm hesitant to add any of these `__reduce__` changes now:

When using Python 3.9.6, Spack is no longer able to fetch anything. Commands like `spack fetch` and `spack install` all break.

Python 3.9.6 includes a [new change](https://github.com/python/cpython/pull/25853/files#diff-b3712475a413ec972134c0260c8f1eb1deefb66184f740ef00c37b4487ef873eR462) that means that `scheme` must be a string, it cannot be None. The solution is to use an empty string like the method default.

Fixes spack#24644. Also see Homebrew/homebrew-core#80175 where this issue was discovered by CI. Thanks @branchvincent for reporting such a serious issue before any actual users encountered it!

Co-authored-by: Todd Gamblin <[email protected]>
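
The fix can be sketched as follows. This is a minimal stand-in for the affected call site, not Spack's actual code path; the point is simply that the `scheme` fallback passed to `urllib.parse` must be a string on Python 3.9.6+:

```python
from urllib.parse import urlparse

def parse_url(url):
    # Python 3.9.6's urllib assumes ``scheme`` is a str (it no longer
    # accepts None), so pass the stdlib default -- an empty string.
    return urlparse(url, scheme='', allow_fragments=False)

print(parse_url("mirror.spack.io/releases").scheme)  # ''
print(parse_url("https://mirror.spack.io").scheme)        # 'https'
```

With `scheme=None`, the same calls break on 3.9.6 while working on earlier interpreters, which is why the breakage only surfaced once CI picked up the new Python.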
@haampie (Member, Author) commented Sep 16, 2021

@becker33 (done) / @adamjstewart / @alalazo can you have a look at the 'problematic' PRs and say whether it's worth adding those? Pinging you because you are the PR authors.

no longer necessary

@haampie (Member, Author) commented Sep 17, 2021

I've added the line-length flake8 change as a separate commit myself, since #20384 is too big to backport; without it, style tests fail because #20384 increased the max line length.

michaelkuhn and others added 4 commits September 17, 2021 14:32
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:

```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```

This caused clang's version to be misdetected as:

```
clang@11.0.0
Target:
```

This resulted in errors when trying to actually use it as a compiler.

When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:

```
==> Warning: "[email protected]+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for [email protected]+clang+lld+lldb]
```

Changing the regex to only match until the end of the line fixes these
problems.

Fixes: spack#19473
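
The problem and fix can be illustrated like this (the patterns below are simplified stand-ins for Spack's actual compiler version regex):

```python
import re

version_output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n"

# A character class that doesn't exclude newlines matches across lines,
# swallowing the start of the "Target:" line:
greedy = re.search(r'clang version ([^ )]+)', version_output).group(1)
print(repr(greedy))    # '11.0.0\nTarget:'

# Excluding '\n' stops the match at the end of the version line:
anchored = re.search(r'clang version ([^ )\n]+)', version_output).group(1)
print(repr(anchored))  # '11.0.0'
```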
Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.

CloudFront is 16x faster (or more) than the old bucket.

- [x] change mirror to https://mirror.spack.io
@haampie (Member, Author) commented Sep 17, 2021

One test failure on macOS:

```
=================================== FAILURES ===================================
_____________________________ test_default_queries _____________________________

database = <spack.database.Database object at 0x10b0f5940>

    def test_default_queries(database):
        # Testing a package whose name *doesn't* start with 'lib'
        # to ensure the library has 'lib' prepended to the name
        rec = database.get_record('zmpi')

        spec = rec.spec

>       libraries = spec['zmpi'].libs

lib/spack/spack/test/database.py:389:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
lib/spack/spack/spec.py:941: in __get__
    value = f()
lib/spack/spack/spec.py:933: in <lambda>
    callbacks_chain.append(lambda: self.default(self, instance, cls))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

descriptor = <spack.spec.ForwardQueryToPackage object at 0x107362280>
spec = [email protected]%[email protected] arch=test-debian6-core2 ^[email protected]%[email protected] arch=test-debian6-core2
cls = <class 'llnl.util.lang.Spec'>

    def _libs_default_handler(descriptor, spec, cls):
        """Default handler when looking for the 'libs' attribute.

        Tries to search for ``lib{spec.name}`` recursively starting from
        ``spec.prefix``. If ``spec.name`` starts with ``lib``, searches for
        ``{spec.name}`` instead.

        Parameters:
            descriptor (ForwardQueryToPackage): descriptor that triggered the call
            spec (Spec): spec that is being queried
            cls (type(spec)): type of spec, to match the signature of the
                descriptor ``__get__`` method

        Returns:
            LibraryList: The libraries found

        Raises:
            NoLibrariesError: If no libraries are found
        """

        # Variable 'name' is passed to function 'find_libraries', which supports
        # glob characters. For example, we have a package with a name 'abc-abc'.
        # Now, we don't know if the original name of the package is 'abc_abc'
        # (and it generates a library 'libabc_abc.so') or 'abc-abc' (and it
        # generates a library 'libabc-abc.so'). So, we tell the function
        # 'find_libraries' to give us anything that matches 'libabc?abc' and it
        # gives us either 'libabc-abc.so' or 'libabc_abc.so' (or an error)
        # depending on which one exists (there is a possibility, of course, to
        # get something like 'libabcXabc.so, but for now we consider this
        # unlikely).
        name = spec.name.replace('-', '?')

        # Avoid double 'lib' for packages whose names already start with lib
        if not name.startswith('lib'):
            name = 'lib' + name

        # If '+shared' search only for shared library; if '~shared' search only for
        # static library; otherwise, first search for shared and then for static.
        search_shared = [True] if ('+shared' in spec) else \
            ([False] if ('~shared' in spec) else [True, False])

        for shared in search_shared:
            libs = fs.find_libraries(
                name, spec.prefix, shared=shared, recursive=True)
            if libs:
                return libs

        msg = 'Unable to recursively locate {0} libraries in {1}'
>       raise spack.error.NoLibrariesError(msg.format(spec.name, spec.prefix))
E       spack.error.NoLibrariesError: Unable to recursively locate zmpi libraries in /private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pytest-of-runner/pytest-0/mock_store0/test-debian6-core2/gcc-4.5.0/zmpi-1.0-ocvhmmabw3q7ezkozxcd7phdakzocf7l

lib/spack/spack/spec.py:872: NoLibrariesError
= 1 failed, 2587 passed, 128 skipped, 22 xfailed, 1 xpassed in 2033.14 seconds =
```

restarted it to see if it's just flaky

@haampie (Member, Author) commented Sep 17, 2021

This error is persistent...

@haampie (Member, Author) commented Sep 17, 2021

The macOS failure is not fixed by yet another pickle PR, so let's see if the tests pass without it...

tgamblin and others added 4 commits September 17, 2021 23:31
…#24794)

This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.

The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.

Because of this, we need to track open lock files so that we only close them when
a process no longer needs them.  We do this by tracking each lockfile by its
inode and process id.  This has several nice properties:

1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
   process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
   just track new lockfiles in the child.
3. Tracking by inode ensures that references are counted per inode, and that we don't
   inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
   number of times necessary for the locks we have.

Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.

Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
      needed (this avoids inadvertently releasing locks that should not be released).
@becker33 becker33 merged commit 321e72c into spack:releases/v0.16 Sep 20, 2021
