Making unit tests green #748
Conversation
Codecov Report
@@ Coverage Diff @@
## develop #748 +/- ##
===========================================
- Coverage 88.06% 87.79% -0.28%
===========================================
Files 36 36
Lines 4090 4096 +6
===========================================
- Hits 3602 3596 -6
- Misses 488 500 +12
Continue to review full report at Codecov.
@mfeurer, @PGijsbers It looks like the AppVeyor build failed to install it. How can I fix that?

AppVeyor doesn't use
I hope you can figure something out. But to answer the original question, if you want to add test dependencies, it's best to add them in
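The reply above is cut off in this transcript, so the file it refers to is not shown. Purely as an illustration of one common pattern (the package names and file layout below are placeholders, not necessarily what this project uses), test-only dependencies in a setuptools project are often declared as an extra:

# setup.py (illustrative sketch only; names are placeholders)
from setuptools import setup, find_packages

setup(
    name="example-package",
    packages=find_packages(),
    extras_require={
        # Test-only dependencies, installed with `pip install -e .[test]`
        "test": [
            "pytest",
            "pytest-xdist",
        ],
    },
)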
@mfeurer

That's unfortunate. Maybe we should execute that test on the live server?

I guess you mean a new instance of an OpenML server for every test run?

Sorry, I don't get that. What I meant is something like https://github.com/openml/openml-python/blob/develop/tests/test_datasets/test_dataset_functions.py#L213

Ah, so you are okay with the dataset list being fetched from the production server?
We do this at several points in our unit tests, but your general intuition is right: only do it if you have to. In particular, never alter the production server, and think twice about whether you really need the production server if it's a demanding query. In this case, for the reasons you pointed out, it makes sense.
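For illustration, a read-only check against the production server might look like the sketch below. It temporarily points openml.config.server at the public API and restores the previous value afterwards; the production URL and the exact pattern used in the linked test are assumptions, not taken from this PR:

import openml


def test_list_datasets_from_production_server():
    # Read-only query only; never publish to or delete from production.
    production_server = "https://www.openml.org/api/v1/xml"  # assumed URL
    previous_server = openml.config.server
    openml.config.server = production_server
    try:
        datasets = openml.datasets.list_datasets()
        assert len(datasets) > 0
    finally:
        # Restore the previous (test) server so later tests do not hit production.
        openml.config.server = previous_server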
@mfeurer
Thanks! |
mfeurer left a comment
I like this; it is much simpler now. I left a few comments on how I think this could be improved.
openml/testing.py
Outdated
logger = logging.getLogger("unit_tests_publish")
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('UploadedFiles.log')
# fh.setLevel(logging.DEBUG)
Is it actually necessary to introduce a logger which logs to a file? Also, could you please remove that comment?
from openml.testing import TestBase

# creating logger for unit test file deletion status
logger = logging.getLogger("unit_tests")
This looks a lot like a copy of the logger above. Also, I assume that the logger names should be swapped.
tests/conftest.py
Outdated
# creating logger for unit test file deletion status
logger = logging.getLogger("unit_tests")
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('UnitTestDeletion.log')
Same as above, do we really need to save this to disk?
I agree, no. I was tailing the log during testing and it helped. I will have them removed.
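If the file handlers are dropped as agreed above, a plain module-level logger is enough. A minimal sketch, reusing the logger name from the conftest.py snippet:

import logging

# Module-level logger; messages go to whatever handlers pytest or the
# root logger configure, instead of a dedicated file on disk.
logger = logging.getLogger("unit_tests")
logger.setLevel(logging.DEBUG)
logger.debug("deleted a test entity")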
tests/conftest.py
Outdated
openml.config.server = TestBase.test_server
openml.config.apikey = TestBase.apikey

# legal_entities defined in openml.utils._delete_entity() - {'user'}
Are these comments still needed?
What does this PR implement/fix? Explain your changes.
Owing to concurrent runs, combined with the deletion of files at the end of the unit-test modules, some tests may fail; rerunning them should make them pass.
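For context, a rough sketch of the kind of session-level cleanup discussed in the review above. It assumes a delete helper with the signature openml.utils._delete_entity(entity_type, entity_id), which the conftest.py snippet refers to; the published_entities tracking structure is hypothetical and only stands in for however the real code records what the tests uploaded:

# tests/conftest.py (sketch): delete entities published during the test
# session once all tests have finished.
import logging

import openml
import openml.utils
from openml.testing import TestBase

logger = logging.getLogger("unit_tests")


def pytest_sessionfinish(session, exitstatus):
    # Point the client at the test server before deleting anything;
    # this must never run against the production server.
    openml.config.server = TestBase.test_server
    openml.config.apikey = TestBase.apikey

    # Hypothetical mapping of entity type -> list of ids collected
    # while the tests ran.
    published_entities = getattr(TestBase, "published_entities", {})
    for entity_type, entity_ids in published_entities.items():
        for entity_id in entity_ids:
            try:
                openml.utils._delete_entity(entity_type, entity_id)
                logger.debug("Deleted %s %s", entity_type, entity_id)
            except Exception:
                logger.warning("Could not delete %s %s", entity_type, entity_id)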