PythonPackage: Let There Be Tests! #2869
Conversation
```python
setup = self.setup_file(self.spec, self.prefix)
python(setup, '--no-user-cfg', command, '--help', **kwargs)
return python.returncode == 0
```
This seems kind of hacky, but I couldn't think of a better way to tell whether or not `python setup.py test` is available. I also don't like how it prints the test line when we run with `spack install --verbose --run-tests`. If we could make that configurable, it would be nice.
Not sure how best to do this, but note that by default `Executable`s raise an exception on error. You need to pass `fail_on_error=False`.
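A runnable sketch of the probe-by-exit-code idea: the helper name is made up, and `-c` invocations stand in for a real `setup.py <command> --help` probe; in Spack one would pass `fail_on_error=False` to the `Executable` instead.

```python
import subprocess
import sys

def has_command(args):
    """Run a python invocation and report success by exit code instead of
    raising on failure (the stdlib analogue of fail_on_error=False)."""
    proc = subprocess.run([sys.executable] + args,
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    return proc.returncode == 0

# '-c' invocations stand in for `setup.py <command> --help` probes:
print(has_command(['-c', 'pass']))                 # True
print(has_command(['-c', 'raise SystemExit(1)']))  # False
```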
@adamjstewart: I think I'm past my daily intelligible github post limit 😄
Everything in the first part of the OP sounds reasonable. I agree... we need a
I would vote for always. Python packages aren't very useful if you can't import them, and it's MUCH quicker than real unit tests. Alternately, could we have a new kind of test phase that does import tests?
My guess is that will lead to dozens or hundreds of spurious problems, which will have to be worked out immediately. There's not much variation here to make such an implicit rule useful, IMHO. I think we should KISS: the Spack package provides a list of Python packages to be imported in the test. Yes, this would mean we will only achieve test coverage over time. But I think it's a clean approach, and it makes the original PR MUCH easier. Or do it right... root around in the package post-install, find all the packages, and try to import them. Or do it really right... Python …
@citibeth Those are all really useful ideas that I didn't think about! I agree that we should probably always run import tests. Let's look at the options we have:
As you've probably already noticed, I prefer auto-magic over explicit if it cuts down on work and lines of code. But yes, if I made it auto-magic, I would have to manually test every PythonPackage in Spack to make sure it doesn't break anything. Not ideal. Asking the user to provide a list of modules and only running the import tests if this list exists is definitely the easiest way to go about things.
This could have unforeseen consequences. For example, we don't want to pick up modules that are only used for testing that would have additional dependencies. We also wouldn't want to pick up any "development" or "internal-use-only" files.
Now this would be really nifty. I'm not sure how reliable it would be though. Some packages like setuptools do this, and we could get the list by running `setuptools.find_packages(exclude=['*.tests'])` in the source directory. But I'm also seeing another option whose list is empty for all of the packages I've tried. This might be the best option, although we run into two problems. Not all packages depend on setuptools; some use the builtin distutils, and I don't think distutils has this capability. Should we add setuptools as a dependency for all PythonPackages anyway? How do we prevent the dependency loop that would occur for `py-setuptools` depending on itself?
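For reference, a self-contained sketch of what `setuptools.find_packages` reports, using a throwaway source tree; all the package names here are invented for illustration.

```python
import os
import tempfile
from setuptools import find_packages

# Build a fake source tree: a package, a subpackage, and a tests subpackage.
root = tempfile.mkdtemp()
for pkg in ('mylib',
            os.path.join('mylib', 'core'),
            os.path.join('mylib', 'tests')):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, '__init__.py'), 'w').close()

all_pkgs = sorted(find_packages(where=root))
no_tests = sorted(find_packages(where=root, exclude=['*.tests']))
print(all_pkgs)   # includes mylib.tests
print(no_tests)   # mylib.tests filtered out
```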
> Asking the user to provide a list of modules and only running the import
> tests if this list exists is definitely the easiest way to go about things.

I would suggest limiting the scope of this PR to that. Once the basic
non-magic functionality is in, we can experiment with magic in a separate
PR. Or maybe make more than one PR to experiment with the consequences of
more than one kind of magic.

> 1. Root around in the package post-install, find all the packages, and
>    try to import them
>
> This could have unforeseen consequences. For example, we don't want to
> pick up modules that are only used for testing that would have additional
> dependencies. We also wouldn't want to pick up any "development" or
> "internal-use-only" files.

Not sure this is a problem. Do Python modules install code in the prefix
that is not intended to be used? Maybe they do...

> Should we add setuptools as a dependency for all PythonPackages anyway?

No, it should only be a dependency for setuptools-dependent Python
packages. Unless you want to make a SetuptoolsPackage and DistutilsPackage
base class.

> How do we prevent the dependency loop that would occur for py-setuptools
> depending on itself?

By not using a base class for `py-setuptools` that automagically depends on
`py-setuptools`.
(force-pushed 0f939c8 to 5aa9e5e)
Ok, I implemented the simplest case: if the package provides a list of modules, the import tests run.
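A minimal sketch of what such an import test could look like, assuming the package supplies a module list; stdlib modules stand in here for real package modules.

```python
import subprocess
import sys

# Hypothetical list a package author would provide:
import_modules = ['json', 'os.path']

def import_test(modules):
    """Try `python -c 'import <mod>'` in a fresh interpreter per module."""
    for mod in modules:
        proc = subprocess.run([sys.executable, '-c', 'import ' + mod],
                              stderr=subprocess.DEVNULL)
        if proc.returncode != 0:
            return False
    return True

print(import_test(import_modules))          # True
print(import_test(['no_such_module_xyz']))  # False
```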
Should I run:

```
python -c 'import setuptools'
```

or:

```
python -c 'from setuptools import *'
```

I'm not sure if the latter will catch more problems. I'm pretty sure I've been able to import `matplotlib` but not able to import `matplotlib.pyplot` before.
Would they not both be the same in terms of testing?
> I'm pretty sure I've been able to import matplotlib but not able to
> import matplotlib.pyplot before.
No firm idea why. This is why the import tests you suggest are such a good
idea!
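The matplotlib/matplotlib.pyplot difference comes from the fact that importing a package does not automatically import its subpackages. A stdlib sketch, using `xml` in place of matplotlib:

```python
import importlib
import sys

def submodule_visible(pkg, sub):
    """Import `pkg` from scratch; report whether `pkg.sub` is reachable
    before and after an explicit submodule import."""
    for name in [m for m in sys.modules
                 if m == pkg or m.startswith(pkg + '.')]:
        del sys.modules[name]
    mod = importlib.import_module(pkg)
    before = hasattr(mod, sub)            # __init__ doesn't pull it in
    importlib.import_module(pkg + '.' + sub)
    after = hasattr(mod, sub)             # set after the explicit import
    return before, after

print(submodule_visible('xml', 'dom'))  # (False, True)
```

So `import matplotlib` succeeding says nothing about whether `matplotlib.pyplot` is importable.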
@alalazo How do I override default tests? In `PythonPackage` I have:

```python
@PackageBase.sanity_check('build')
@PackageBase.on_package_attributes(run_tests=True)
def test(self):
    self.setup_py('test')
```

However, this doesn't work for numpy. In the numpy package I tried:

```python
@PythonPackage.sanity_check('build')
@PythonPackage.on_package_attributes(run_tests=True)
def test(self):
    pass
```

but it is still running `setup.py test`.
@adamjstewart In …
@alalazo I tried that too. With:

```python
@PackageBase.sanity_check('build')
@PackageBase.on_package_attributes(run_tests=True)
def test(self):
    ...
```

in `PythonPackage` and:

```python
def test(self):
    pass
```

in the numpy package, the base `test` still runs.
@adamjstewart You're missing an indirection. In `PythonPackage` there is:

```python
fn = getattr(self, 'check')
```

At that point you can override `check` in the package.
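The getattr indirection can be reproduced outside Spack; the class and method names below are illustrative only.

```python
class Base:
    def check(self):
        return 'base check'

    def run_callbacks(self):
        # The callback is looked up by *name* at call time, so whatever
        # method the instance's class provides under that name wins,
        # regardless of decorators applied to a differently named method.
        fn = getattr(self, 'check')
        return fn()

class NumpyLike(Base):
    def check(self):
        return 'skipped'

print(Base().run_callbacks())       # base check
print(NumpyLike().run_callbacks())  # skipped
```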
Thanks @alalazo, I didn't notice that. I wonder if there's a better way to do that... Well, I can't think of one off the top of my head, so I'll steal your approach.
Now that I think about it, the easiest way to not have to use something like …
@adamjstewart The easiest imho is to have only one version of …
@alalazo Why did you do a try/except for that method? Was it before you added a default …?
I think your guess is right, if I remember well how it evolved. But I would leave it like it is: extra safety is always welcome! 😄
```python
# >>> import setuptools
# >>> setuptools.find_packages()
#
# in the source tarball directory
```
Is there some way to associate a docstring with a particular attribute in a class? I would like this information to show up in the API docs.
```python
    'scipy.sparse.linalg.eigen', 'scipy.sparse.linalg.isolve',
    'scipy.sparse.linalg.eigen.arpack', 'scipy.sparse.linalg.eigen.lobpcg',
    'scipy.special._precompute'
]
```
Is there a better way to do this? This list is the output of:
```python
>>> import setuptools
>>> setuptools.find_packages()
```
```python
# FIXME: Is this really necessary?
# Known not to work with 2.23, 2.25
# depends_on('[email protected]:', type='build')
```
We've discussed this before, but things like binutils and curl aren't really dependencies of any particular package. They are necessary for every package in Spack. In a way, they are Spack dependencies. The only reason people have been adding them is that certain packages require newer versions of these dependencies than are found on their ancient systems. We added a note about this in the documentation. I think this dependency should be removed. It won't even build on macOS.
```python
# scipy from its source directory; please exit the scipy
# source tree, and relaunch your python interpreter from there.
with working_dir('..'):
    python('-c', 'import scipy; scipy.test("full", verbose=2)')
```
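For readers unfamiliar with `working_dir`, here is a minimal stand-in showing the directory-switching behavior relied on above; this is a sketch, not Spack's actual implementation.

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def working_dir(path):
    """Temporarily change the working directory, restoring it afterwards."""
    old = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old)

start = os.getcwd()
with working_dir(tempfile.gettempdir()):
    pass  # e.g. run the import test away from the source tree
print(os.getcwd() == start)  # True: directory restored
```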
By default, numpy and scipy run a quick test suite. If you pass the "full" label, they run the full test suite. This can take quite a long time (17 minutes for scipy on my Mac), but the good news is that ALL THE TESTS PASS! Should we run the full test suite or just the quick test suite? Neither will be run unless the user passes --run-tests anyway.
My vote is for full test suite if --run-tests is passed.
```python
# FIXME: Is there any way around this problem?
# depends_on('py-pytest-flake8', type='test')
# depends_on('[email protected]:', type='test')
# depends_on('py-mock', when='^python@:3.2', type='test')
```
One constraint I want to put on the 'test' deptype is that both of the following commands would install the same package with the same hash:
```
spack install py-setuptools
spack install --run-tests py-setuptools
```
An unfortunate consequence of this is that we have no means of working around dependency loops. We can't do something similar to #2548 if we want to maintain this constraint, because the test dependency would have the same spec as the package that indirectly depends on it.
I don't think there is any good way around this problem. Let me know if you can think of anything better though.
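The constraint can be illustrated with a toy hash over a spec dict; the field names and versions here are invented, not Spack's actual spec format.

```python
import hashlib
import json

def spec_hash(spec):
    """Hash a spec while ignoring test-only dependencies, so installs with
    and without --run-tests land on the same hash."""
    hashed = {k: v for k, v in spec.items() if k != 'test_deps'}
    blob = json.dumps(hashed, sort_keys=True).encode()
    return hashlib.sha1(blob).hexdigest()[:8]

plain = {'name': 'py-setuptools', 'version': '34.0', 'test_deps': []}
tests = {'name': 'py-setuptools', 'version': '34.0',
         'test_deps': ['py-pytest', 'py-mock']}
print(spec_hash(plain) == spec_hash(tests))  # True: same installation
```

The flip side, as noted above, is that the hash can never distinguish a package from its own test dependencies, so dependency loops cannot be broken the way #2548 does.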
P.S. The package tests that come with setuptools don't even pass, even when pip installs all of the required dependencies. The annoying thing about this is that if a user runs `spack install --run-tests py-numpy`, it will always fail when trying to build py-setuptools. Also, even if they did succeed, I'm worried that `spack activate` will copy these test dependencies to the Python installation, causing conflicts when trying to activate `py-pytest` or `py-mock` separately.
@tgamblin I think this is ready for review. It won't work correctly until a test deptype is added, but it makes a lot of other modifications to numpy and scipy that I would like to get merged. Otherwise, I'll end up duplicating them in another PR. |
@alalazo I just updated this to use your new …
Moved some of the unrelated changes to #3195 to make this PR easier to review. I'll remove them from this PR in a minute. |
Ping @tgamblin. This PR won't be incredibly useful until we have a `test` deptype.
* Run `python setup.py test` if `--run-tests`
* Attempt to import the Python module after installation
* Add testing support to numpy and scipy
* Remove duplicated comments
* Update to new run-tests callback methodology
* Remove unrelated changes for another PR
Resolves #1305

The goal of this PR is to add a better testing framework for our Python packages. I see two low-hanging fruit that we can use for this:

1. Run `python setup.py test` if available
2. Attempt to import the Python module after installation

Most Python modules (maybe only the ones that use setuptools?) implement some kind of a "test" command that can be run after "build" and before "install". This provides basic package tests, implemented by the developers.

The main thing holding this up is that we really need a "test" deptype (see #1279). For example, `py-setuptools` has no dependencies, aside from Python, but its test suite requires several additional packages. I certainly don't want to add these as build dependencies, because they will make all package installations significantly more complex. I only want to build these dependencies if the user supplies `--run-tests`. And I don't want these dependencies included in the hash: `spack install --run-tests py-numpy` and `spack install py-numpy` should not be two separate installations.

As for the import tests after installation, do we want to always run these (like the `sanity_check_prefix` method), or only with `--run-tests`? Here is what I'm envisioning. By default, Spack tries to import the name of the package minus the "py-". We will also need a method to override this. For example, the import name for `py-meep~mpi` is `meep`, but the import name for `py-meep+mpi` is `meep_mpi`. This will need to be tested on a per-package basis, but for the most part, dropping "py-" should give us the import name for most Python packages.
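The naming convention sketched above could look like the following; the override table entries are just the examples from this post, and the function name is made up.

```python
# Per-package overrides for cases where dropping 'py-' is not enough:
OVERRIDES = {
    'py-meep~mpi': 'meep',
    'py-meep+mpi': 'meep_mpi',
}

def default_import_name(spec_name):
    """Guess the import name: check overrides, else strip the 'py-' prefix."""
    if spec_name in OVERRIDES:
        return OVERRIDES[spec_name]
    if spec_name.startswith('py-'):
        return spec_name[len('py-'):]
    return spec_name

print(default_import_name('py-numpy'))     # numpy
print(default_import_name('py-meep+mpi'))  # meep_mpi
```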