
Conversation

@Neeratyoy (Contributor)

What does this PR implement/fix? Explain your changes.

Adds a new example to fetch evaluations.

How should this PR be tested?

The examples/ folder has a new file named fetch_evaluations_tutorial.py.

Any other comments?

Currently, the example shows the following:

  • Fetch a task's evaluations
  • Create a CDF of predictive accuracy for all runs obtained
  • Create box plots to compare the predictive accuracy of the top 10 flows

Feedback on further additions to the example or on plot enhancements would be welcome.

Thanks.

@Neeratyoy Neeratyoy assigned Neeratyoy and mfeurer and unassigned Neeratyoy and mfeurer May 6, 2019
@Neeratyoy Neeratyoy requested a review from mfeurer May 6, 2019 15:43
@codecov-io

codecov-io commented May 9, 2019

Codecov Report

Merging #688 into develop will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff            @@
##           develop     #688   +/-   ##
========================================
  Coverage    90.35%   90.35%           
========================================
  Files           36       36           
  Lines         3785     3785           
========================================
  Hits          3420     3420           
  Misses         365      365
Impacted Files Coverage Δ
openml/datasets/functions.py 95.66% <ø> (ø) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7129cf0...98fecdd. Read the comment docs.

@mfeurer mfeurer merged commit eec86a9 into develop May 13, 2019
@mfeurer mfeurer deleted the eval_example branch November 12, 2019 10:09