Conversation
AppVeyor is failing in the install process somewhere, which is not touched by this PR.

Thanks for working on this Kai! It will definitely be very useful to have. I had not used asv before. I agree that using the latest packages from upstream is reasonable for development purposes. In that case I guess the workflow would be to run asv on a proposed PR vs. master to check that no unexpected performance regressions are being introduced.

+1 for using the latest released versions of numpy and cython.

It's a pip issue; looks like it's fixed by … EDIT: or we just ignore the failure; it's not like something will break by adding …
@@ -0,0 +1,57 @@
    import numpy as np
    import pywt
I think for newer functions the way to write the import in asv is:
    try:
        from pywt import dwt, idwt
    except ImportError:
        pass
Otherwise when you run the benchmarks for newly added functions against an older pywt version that doesn't have them, the tests crash. Obviously dwt, idwt have existed for a while, but if we're going to copy this as a template for other benchmarks, maybe good to keep the style uniform.
See https://github.com/scipy/scipy/blob/master/benchmarks/README.rst#writing-benchmarks for this and other benchmarking pointers to steal for a basic README here.
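To make the guarded-import style concrete, here is a minimal sketch of a benchmark module written that way; the class name, parameters, and sizes are illustrative, not taken from this PR:

```python
# Sketch of a benchmark module using the guarded import style described
# above, so that running asv against an old pywt commit that lacks a
# function skips the benchmark instead of crashing the whole run.
# Names (DwtTimeSuite, params values) are illustrative, not from this PR.
try:
    from pywt import dwt, idwt
except ImportError:
    dwt = idwt = None  # older pywt: benchmark will be skipped in setup


class DwtTimeSuite(object):
    # asv runs each time_* method once per combination of these parameters
    params = [16, 1024]
    param_names = ["n"]

    def setup(self, n):
        if dwt is None:
            # asv treats NotImplementedError raised in setup as
            # "skip this benchmark" rather than as a failure
            raise NotImplementedError("dwt not available in this version")
        self.data = [float(i) for i in range(n)]

    def time_dwt(self, n):
        dwt(self.data, "db1")
```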
… works as expected. Merging this PR without sorting out hosting etc. first would be fine with me.

My main concern with this at the moment is that ASV is incredibly slow. It takes about 20 minutes to run. Is anybody else encountering this?

I have not tried with …
That matches my experience: most of the time seems to be spent in initialization rather than in actual timing. This is possibly still useful to control for regressions, but it's too slow to get any kind of feedback during development, so I'd prefer another alternative. There is a benchmarking module for pytest, so I'll check that out. If anybody else has ideas, let me know.
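For quick feedback during development, one stopgap is to time a single function with the stdlib timeit module instead of a full asv run. A rough sketch (the helper name and sizes are made up, and it only measures anything if pywt is importable):

```python
# Rough development-time check with stdlib timeit, as a faster
# alternative to a full asv run. Helper name and sizes are illustrative.
import timeit

try:
    import pywt
except ImportError:
    pywt = None  # no pywt available: helper returns None


def quick_dwt_timing(n=10000, number=100):
    """Return best-of-3 wall time for `number` pywt.dwt calls, or None."""
    if pywt is None:
        return None
    data = [float(i) for i in range(n)]
    return min(timeit.repeat(lambda: pywt.dwt(data, "db2"),
                             number=number, repeat=3))
```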

Initialization may be a little faster if you are able to use …

Since the envs don't get cleaned up even with …

I think it would help to steal …

Yeah, 5 minutes would be a reasonable time for full benchmarks. What do you get with …
Current coverage is 86.86% (diff: 100%)

               master    #158    diff
  ====================================
  Files            17      17
  Lines          2216    2216
  Methods           0       0
  Messages          0       0
  Branches        242     242
  ====================================
  Hits           1925    1925
+ Misses          245     244      -1
- Partials         46      47      +1

No benchmarks are set up yet, but ASV builds correctly.

Interestingly, 1D dwt is significantly slower on Python 3.

Benchmarking these functions should hopefully give a close view of how performance in the C core changes, without requiring a lot of effort to instrument the C functions separately. The different modes should have somewhat different performance, especially for small sizes and large filters. If that difference doesn't show up, there is too much overhead to benchmark the convolution this way, and it needs to be done separately. Modes are only set for the 1D case, since the n-dimensional path is not affected by the mode (it is just passed through to the 1D code).
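The setup described above could look roughly like this as an asv suite parameterized over size, wavelet, and mode; the names and parameter values here are illustrative, not the PR's exact code:

```python
# Illustrative asv suite parameterized over size, wavelet, and boundary
# mode, so that mode-dependent work in the C core shows up in the timings
# (most visibly for small sizes and long filters). Not the PR's exact code.
try:
    import pywt
except ImportError:
    pywt = None


class DwtModeTimeSuite(object):
    params = ([100, 10000],
              ["haar", "db4"],
              ["zero", "symmetric", "periodization"])
    param_names = ["n", "wavelet", "mode"]

    def setup(self, n, wavelet, mode):
        if pywt is None:
            # asv interprets NotImplementedError in setup as a skip
            raise NotImplementedError("pywt not importable")
        self.data = [float(i) for i in range(n)]

    def time_dwt(self, n, wavelet, mode):
        # 1D only: the n-dimensional path just forwards the mode to 1D
        pywt.dwt(self.data, wavelet, mode)
```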

While we're in PR merging mode: it would be good to get this in. We can worry about a site for displaying results and fine-tuning later, but once we have the basics in place, new PRs can easily add benchmarks for new functions or performance improvements.

To get this in, I suggest: …

The full run indeed takes forever; not yet sure why.

    // The Pythons you'd like to test against. If not provided, defaults
    // to the current version of Python used to run `asv`.
    "pythons": ["2.7", "3.4"],
I'd remove this to start with, cuts down on runtime and build issues. The benchmarking results won't really depend on Python version in an interesting/significant way.
    pywt.dwt(self.data, wavelet, mode)

    def time_dwt_ext(self, n, wavelet, mode):
        pywt._extensions._dwt.dwt_single(self.data, self.wavelet, self.mode)
This probably shouldn't be in a benchmark; we only need/want to benchmark public functions/methods.

The same comment applies to time_idwt_ext.
    // If missing or the empty string, the tool will be automatically
    // determined by looking for tools on the PATH environment
    // variable.
    "environment_type": "virtualenv",
Better not to define this; I'd expect the majority of Windows users to have better luck with conda.

If we comment out the line, we should mention that the user should uncomment it and set it to either conda or virtualenv. When I had it commented out altogether, asv issued a warning stating that this field will be required in a future release.
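One way to follow that suggestion in asv.conf.json (a sketch of possible wording, not the final text) would be to ship the line commented out with a pointer right next to it:

```
// Uncomment and set to "conda" or "virtualenv" as appropriate for your
// setup; asv warns that omitting this field entirely will become an
// error in a future release.
// "environment_type": "virtualenv",
```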

Just tried this again using asv 0.2.

    class SwtTimeSuiteBase(object):
        """
        Set-up for (I)DWT timing.

continued in gh-264, so closing this PR. |
Adds a basic benchmarking suite, hopefully giving an idea of how to write something similar for other components. There are still a number of things to do; tracking in #136.