Conversation

@Neeratyoy
Contributor

What does this PR implement/fix? Explain your changes.

Owing to concurrent test runs, combined with the deletion of files at the end of unit-test modules, some tests may fail intermittently. Rerunning them should ensure they pass.
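
For context, the rerun approach mentioned in the original PR title can be sketched with the third-party flaky plugin as below; the decorator parameters and the test body are illustrative assumptions, not the exact code in this PR.

# A minimal sketch (not this PR's actual code) of marking a stochastic
# test for automatic reruns with the third-party flaky plugin.
import random

from flaky import flaky


@flaky(max_runs=3, min_passes=1)  # rerun up to 3 times before reporting a failure
def test_operation_that_sometimes_fails():
    # stand-in for a test that can fail transiently when another worker
    # deletes shared files at the same time
    assert random.random() > 0.1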

@codecov-io

codecov-io commented Jul 25, 2019

Codecov Report

Merging #748 into develop will decrease coverage by 0.27%.
The diff coverage is 100%.


@@             Coverage Diff             @@
##           develop     #748      +/-   ##
===========================================
- Coverage    88.06%   87.79%   -0.28%     
===========================================
  Files           36       36              
  Lines         4090     4096       +6     
===========================================
- Hits          3602     3596       -6     
- Misses         488      500      +12
Impacted Files Coverage Δ
openml/testing.py 89.06% <100%> (-6.24%) ⬇️
openml/config.py 89.87% <0%> (-2.54%) ⬇️
openml/_api_calls.py 87.01% <0%> (-1.3%) ⬇️
openml/runs/functions.py 82.07% <0%> (-0.29%) ⬇️
openml/evaluations/functions.py 92.17% <0%> (+2.62%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e6ee09d...d635500.

@Neeratyoy
Contributor Author

@mfeurer, @PGijsbers
If I want to import a new package for the test cases, isn't it enough to add it to ci_scripts/install.sh here?

It looks like the AppVeyor build failed to install it. How can I fix that?

@Neeratyoy Neeratyoy marked this pull request as ready for review August 1, 2019 17:19
@Neeratyoy Neeratyoy changed the title from "Adding flaky rerun decorators for stochastic failures of unit tests" to "Making unit tests green" Aug 1, 2019
@mfeurer
Collaborator

mfeurer commented Aug 2, 2019

AppVeyor doesn't use install.sh. However, I doubt that posix-ipc will work on Windows; see, for example, the project description here. We should probably discuss this offline on Monday.

@PGijsbers
Collaborator

I hope you can figure something out. But to answer the original question: if you want to add test dependencies, it's best to add them in setup.py.
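
For reference, a hedged sketch of declaring a test-only dependency in setup.py via an extras_require group; the "test" extras key and the listed packages are assumptions for illustration, not necessarily how this repository is set up.

# A sketch of declaring test-only dependencies in setup.py; the "test"
# extras key and the listed packages are assumptions for illustration.
from setuptools import setup, find_packages

setup(
    name="openml",
    packages=find_packages(),
    install_requires=[
        # runtime dependencies go here
    ],
    extras_require={
        "test": [
            "pytest",
            # POSIX-only; a PEP 508 environment marker keeps it off Windows builds
            "posix-ipc ; sys_platform != 'win32'",
        ],
    },
)

Installing with pip install -e .[test] then pulls in the test dependencies, and the environment marker skips posix-ipc on Windows.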

@Neeratyoy
Contributor Author

@mfeurer
If we delete all the files for each of the workers independently and asynchronously, test_list_datasets_with_high_size_parameter fails.

@mfeurer
Collaborator

mfeurer commented Aug 6, 2019

That's unfortunate. Maybe we should execute that test on the live server?

@Neeratyoy
Contributor Author

That's unfortunate. Maybe we should execute that test on the live server?

I guess you mean a new instance of an OpenML server for every test run?
Won't that interfere with the fetch requests to the test server, that is, by having a new openml.config.server value?

@mfeurer
Collaborator

mfeurer commented Aug 6, 2019

@Neeratyoy
Contributor Author

Aah, you are okay with the dataset list being fetched from the prod server.
Alright, I was thinking we shouldn't be touching that during tests, but this is a read-only operation.
Makes sense.
Thanks!

@PGijsbers
Collaborator

We do it at several points in our unit tests. But your general intuition is right: only do it if you have to. In particular, never alter the production server, and think twice about whether you really need the production server if it's a demanding query. In this case, for the reasons you pointed out, it makes sense.
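
To illustrate the read-only pattern being discussed, here is a sketch of a test that temporarily points the client at the production server and restores the previous value afterwards; the URL, the listing call, and the size value are assumptions rather than the code merged in this PR.

# A sketch of temporarily reading from the production server in a test;
# the production URL and the listing call are illustrative assumptions.
import openml


def test_list_datasets_with_high_size_parameter():
    previous_server = openml.config.server
    openml.config.server = "https://www.openml.org/api/v1/xml"  # read-only use
    try:
        datasets = openml.datasets.list_datasets(size=1000)
        assert len(datasets) > 0
    finally:
        # restore the previous (test) server so other tests are unaffected
        openml.config.server = previous_server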

@Neeratyoy
Contributor Author

@mfeurer
All tests passed, except one unit test that gave:
openml.exceptions.OpenMLServerException: An Elastic Search Exception occured

@Neeratyoy Neeratyoy requested a review from mfeurer August 7, 2019 11:40
@Neeratyoy
Contributor Author

Thanks!

Collaborator

@mfeurer mfeurer left a comment


I like this; it is much simpler now. I left a few comments on how I think this could be improved.

logger = logging.getLogger("unit_tests_publish")
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('UploadedFiles.log')
# fh.setLevel(logging.DEBUG)
Collaborator


Is it actually necessary to introduce a logger which logs to a file? Also, could you please remove that comment?

from openml.testing import TestBase

# creating logger for unit test file deletion status
logger = logging.getLogger("unit_tests")
Collaborator


This looks a lot like a copy of the logger above. Also, I assume that the logger names should be swapped.

# creating logger for unit test file deletion status
logger = logging.getLogger("unit_tests")
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('UnitTestDeletion.log')
Collaborator


Same as above, do we really need to save this to disk?

Contributor Author


I agree, no.
I was tailing these logs during testing and that helped.
Will have them removed.
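
For illustration, a console-only variant of the logger from the snippets above, which avoids writing UploadedFiles.log or UnitTestDeletion.log to disk; this is only a sketch of the suggested direction, not the code that was finally merged.

# A sketch of a console-only logger replacing the file-based handlers;
# the logger name mirrors the snippet above, but this is not the merged code.
import logging

logger = logging.getLogger("unit_tests_publish")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()  # log to stderr instead of a file on disk
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))
logger.addHandler(handler)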

openml.config.server = TestBase.test_server
openml.config.apikey = TestBase.apikey

# legal_entities defined in openml.utils._delete_entity() - {'user'}
Collaborator


Are these comments still needed?

@mfeurer mfeurer merged commit 4b84dc6 into develop Aug 7, 2019
@mfeurer mfeurer deleted the fix_unit_test_list_dataset branch August 7, 2019 14:33