add remote-data to Travis tests #734
Conversation
|
Note that #739 contains a cherry-pick of the third commit here, so it should be merged before this (and this rebased). The reasoning is that this PR isn't a priority for the 0.2 release, but that specific fix should be in 0.2. |
|
This just seems like such a possible can of worms. If the remote-data tests hang or just take a long time for whatever reason, isn't it possible for them to be killed by Travis? Seems to me like it would be fine to enable this for the Jenkins builds, which are less frequent and which we have more control over. |
|
In principle that could happen, but the default timeout in |
|
Hm, strange. Of course, I could do what @iguananaut suggested and run the tests off of a few canned responses instead of relying on the HTTP requests? If you think that'd be good, I'm happy to fix that and file a PR. |
|
Doesn't it defeat the point of the test to not test it on real remote data? An alternative might be to have it try the remote version, and if the connection fails, have it use the fake data. Or have it try a few extra times if it fails the first time. Or just have it skip the test in the specific case of a timeout (instead of failing). I could modify |
|
@eteq Hm, that is a good point. I like your plan though: try the remote version and default to some cached response if that fails. I'll make this change and submit a PR, but one question is where to put that cached response? |
|
Depends on the size - if it's just a reasonable amount of text, put it as a string in the code. Otherwise, you can use the package data stuff to store it in the |
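The try-remote-then-fall-back idea being discussed could look roughly like the following sketch. The function name and the canned response string are hypothetical illustrations, not actual astropy code:

```python
import socket
from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical canned response, stored inline because it is small.
CACHED_RESPONSE = "<html><body>canned astropy test page</body></html>"

def fetch_with_fallback(url, timeout=5):
    """Try the real remote resource; fall back to the cached copy on
    any network failure so the rest of the test can still run."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (URLError, socket.timeout, OSError):
        # Timeout, DNS failure, refused connection, etc.: use the
        # canned data so the parsing code downstream is still exercised.
        return CACHED_RESPONSE
```

A variant of this could instead call pytest.skip() in the except branch, matching the "skip on timeout instead of failing" option mentioned above.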
|
@eteq - do you think it is still worth having this? If so, do you want to rebase and see how it fares? My feeling is that it introduces too much instability into Travis, so we could consider closing and just making sure that Jenkins runs the remote tests. |
|
Well, @astrofrog, part of the problem is that every time I've gone back to try to work on this, Furthermore, we already have a variety of intermittent network errors on Travis (e.g. for pypi), so if those dominate over any that might be introduced here, I think including remote-data is worthwhile. I'll rebase this now to only set the remote-data on the relevant tests, and do nothing else. We'll see how many errors that produces, try to resolve them, and perhaps judge the instability from there? |
|
@eteq - things have changed a bit since this PR. In particular, the builds are now a lot more stable, so I think the stochasticity that we'd introduce by activating remote tests is probably not worth it. Furthermore, we are already running the remote tests on https://ci.scipy.org - so shall we just go ahead and close this? |
|
@astrofrog - Probably, but there's still the remaining question of whether or not we want That said, you're right that it's a bigger problem if those tests often fail for reasons outside our control - if that's true (and likely to remain true), then this should just be closed. But perhaps that has also changed in the year since this was first introduced? |
|
One possibility that I was thinking about is that we could add a pytest plugin to retry failed tests so that we could envisage adding a build with e.g. |
|
@astrofrog - yeah, that could work... although we may not want it to retry all failed tests, because that might dramatically slow down a Travis run with real failures. Do you know how to go about implementing a |
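Short of a full pytest plugin, the retry-only-network-failures idea could be sketched as a decorator applied to individual flaky tests. Everything here is illustrative, not an existing plugin API:

```python
import functools
import time

def retry_on_error(exceptions, tries=3, delay=0.0):
    """Re-run the wrapped callable when it raises one of the given
    network-style exceptions; other failures (e.g. AssertionError)
    still propagate immediately."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == tries - 1:
                        raise  # give up after the last attempt
                    time.sleep(delay)
        return wrapper
    return decorator
```

Restricting the retried exception types is what addresses the concern above: a genuine test failure fails fast, while a transient ConnectionError gets a couple more chances.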
|
I'm -1 on doing this on travis-ci. Instead add to the release notes that the remote-data tests should be run and devs should be encouraged to do it when the release candidate comes out. |
|
I'm with @cdeil on this, for now. |
|
I'd prefer to see something like #2126, which would test code currently tested in remote data tests, but using dummy local data and fake sockets. If that approach is taken though, it should also remain possible to run those tests with real sockets and real network connections, since it's also a valuable way to test against compatibility with actual online services. But I agree with @cdeil that that's something that should be done on occasion, perhaps manually, and not with every build on Travis. |
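The "dummy local data and fake sockets" approach could, in its simplest form, just stub out the network call in the test. The function and URL below are illustrative stand-ins, not astropy's actual API:

```python
import io
import urllib.request
from unittest import mock

def get_page(url):
    # Stand-in for code under test that normally hits the network.
    return urllib.request.urlopen(url).read().decode("utf-8")

# In a test, replace the socket-backed call with canned local data so
# the same parsing code runs entirely offline.
fake = io.BytesIO(b"<html>dummy page</html>")
with mock.patch("urllib.request.urlopen", return_value=fake):
    content = get_page("http://example.invalid/")
```

Running the same tests without the patch would then exercise the real services, which keeps the "real sockets and real network connections" option open as suggested.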
|
I agree, let's close this. |
This is a new take on #491, aiming at turning on the --remote-data option for the Travis tests. It implements the suggestion by @astrofrog and @mdboom of using the astropy web site for the test web page, rather than google (which was doing some odd things with encoding). Note that this also required astropy/old-astropy-website-src@5fafaab, which I already pushed up. This solves the problems encountered in #491, but it reveals a couple of others. We'll see what this round of tests shows, but you can look at https://travis-ci.org/eteq/astropy/builds/4609134 for the results of a test build I ran of similar code.
I notice three problems there (the third should be fixed in this PR, but we'll see what Travis says):
1. find_api_page seems to have problems with its request to get objects.inv from the astropy docs timing out. I'm not sure if this is transient or not - I haven't seen it on local tests, but I think @iguananaut mentioned this previously.
2. coordinates.name_resolve seems to sometimes fail with a mysterious BadStatusLine: '' - I have no idea what that's about - @adrn, any ideas?
3. TypeError: Type str doesn't support the buffer API - this is a py2 -> py3 problem in find_api_page, and I believe I fixed it with the last commit here. The Travis tests of this PR will determine if so.

1 and 2 seem to be intermittent, but due to the server rather than astropy itself. What do you all (esp. @mdboom @astrofrog, who were involved in #491) think we should do about this?
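For context, the TypeError in the third problem is the classic Python 3 bytes-vs-str mismatch. A minimal illustration of the pitfall (not the actual find_api_page code) is:

```python
import zlib

# objects.inv-style data arrives compressed, so decompressing it on
# Python 3 yields bytes, not str.
compressed = zlib.compress(b"Project: astropy\n")
raw = zlib.decompress(compressed)

# Mixing bytes with str - e.g. raw.startswith("Project") - raises
# "TypeError: Type str doesn't support the buffer API" on Python 3,
# so the bytes must be decoded before any string operations.
text = raw.decode("utf-8")
assert text.startswith("Project")
```

On Python 2 the same mixed comparison silently works, which is why this class of bug only surfaces in the py3 builds.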