Conversation

@sahithyaravi (Member)

Reference Issue

Fixes #838

What does this PR implement/fix? Explain your changes.

Fixes list_evaluations_setups so that it works for evaluation counts that are not a multiple of 100.

How should this PR be tested?

df = openml.evaluations.list_evaluations_setups(
    function='predictive_accuracy',
    flow=[5891],
    task=[37],
    output_format='dataframe',
    parameters_in_separate_columns=True,
)

len(df)  # 2429
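The bug class behind this fix can be illustrated with a minimal paginated-fetch sketch. The helper names below are hypothetical and do not reflect openml's actual internals; the point is that 2429 results arrive as 24 full pages of 100 plus one partial page of 29, and the collection loop must handle that trailing partial page.

```python
def fetch_all(fetch_page, total, page_size=100):
    """Collect results page by page; the last page may be shorter than page_size."""
    results = []
    offset = 0
    while offset < total:
        page = fetch_page(offset, page_size)
        if not page:  # defensive stop if the backend returns fewer rows than expected
            break
        results.extend(page)
        offset += len(page)
    return results


# Simulated backend with 2429 records: 24 full pages plus a final page of 29.
data = list(range(2429))
pages_fetched = []


def fake_fetch(offset, size):
    pages_fetched.append(offset)
    return data[offset:offset + size]


out = fetch_all(fake_fetch, len(data))
```

A loop that instead assumed every page holds exactly 100 rows (or computed the page count with floor division) would silently drop the final 29 results.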

Any other comments?

Changed the existing test so that it uses a result size that is not a multiple of 100.
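A plausible failure mode for this kind of bug, sketched below as an assumption rather than the verified root cause: computing the number of result pages with floor division drops the trailing partial page, while ceiling division covers it.

```python
import math

total = 2429     # evaluation count from the test above
page_size = 100  # server page size

# Buggy variant: floor division misses the trailing partial page.
buggy_pages = total // page_size            # 24 pages -> only 2400 rows fetched

# Fixed variant: ceiling division includes the remainder.
fixed_pages = math.ceil(total / page_size)  # 25 pages -> all 2429 rows fetched
```

This is why the PR changes the test to use a size that is not a multiple of 100: with an exact multiple, both variants fetch the same number of rows and the bug is invisible.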

@codecov-io

codecov-io commented Oct 17, 2019

Codecov Report

❗ No coverage uploaded for pull request base (develop@35dd7d3).
The diff coverage is 100%.


@@            Coverage Diff             @@
##             develop     #846   +/-   ##
==========================================
  Coverage           ?   88.74%           
==========================================
  Files              ?       37           
  Lines              ?     4942           
  Branches           ?        0           
==========================================
  Hits               ?     4386           
  Misses             ?      556           
  Partials           ?        0
Impacted Files Coverage Δ
openml/evaluations/functions.py 92.23% <100%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 35dd7d3...b022b20.

@mfeurer mfeurer merged commit c59c3b8 into develop Oct 17, 2019
@mfeurer mfeurer deleted the fix_838 branch October 17, 2019 12:04


Development

Successfully merging this pull request may close these issues.

list_evaluations_setup() fails when number of evaluations not a multiple of 100

4 participants