This repository was archived by the owner on Feb 28, 2024. It is now read-only.

Conversation

@glouppe
Member

glouppe commented Apr 5, 2016

:)

@glouppe
Member Author

glouppe commented Apr 5, 2016

This adds some more tests. I'll merge if Travis is happy.

yield (check_minimize, bench2, -5, [[-6, 6]], 0.05, 10000)
yield (check_minimize, bench3, -0.9, [[-2, 2]], 0.05, 10000)
yield (check_minimize, branin, 0.39, [[-5, 10], [0, 15]], 0.1, 10000)
yield (check_minimize, hart6, -3.32, np.tile((0, 1), (6, 1)), 0.5, 10000)
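These yield tuples follow the nose generator-test pattern: each tuple is unpacked into a call of the form check_minimize(func, y_opt, bounds, margin, n_calls). A minimal sketch of what such a helper could look like; the name and argument order are taken from the test above, but the random-sampling body below is an illustrative stand-in, not the actual implementation:

```python
import numpy as np

def check_minimize(func, y_opt, bounds, margin, n_calls, seed=0):
    """Evaluate `func` at `n_calls` points drawn uniformly within `bounds`
    and assert the best value found lies within `margin` of the known
    optimum `y_opt`. Illustrative stand-in for the real test helper."""
    rng = np.random.RandomState(seed)
    bounds = np.asarray(bounds, dtype=float)
    # Draw all candidate points at once; low/high broadcast per dimension.
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_calls, len(bounds)))
    best = min(func(x) for x in X)
    assert best <= y_opt + margin, (best, y_opt, margin)
    return best

# Usage on a stand-in objective: a quadratic with known minimum 0 at the origin.
best = check_minimize(lambda x: float(np.sum(x ** 2)), 0.0, [[-6, 6]], 0.05, 10000)
```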
Member Author


Why bother making it complicated when this passes? :( @betatim @MechCoder

Member


Great question.

Well, these are "simple" functions. We should also investigate more complicated objectives where we can actually gain information from the data, for instance tuning LogisticRegression, where we could choose hyperparameters close to those with high regularization.

LogisticRegression might be a bad example, though, because again we would have to specify the scale on which to query the parameters.
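The scale issue above is the usual one for regularization strengths: a sketch of querying LogisticRegression's C log-uniformly rather than uniformly, so that each decade is treated equally (the variable names here are illustrative, not part of the test suite):

```python
import numpy as np

rng = np.random.RandomState(0)

# Draw C log-uniformly over [1e-4, 1e2]: uniform sampling on this range
# would almost never propose small values like 1e-3, while sampling the
# exponent uniformly gives every decade equal probability.
low, high = 1e-4, 1e2
C_values = 10 ** rng.uniform(np.log10(low), np.log10(high), size=5)
```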

glouppe merged commit f215f6e into scikit-optimize:master Apr 5, 2016
@betatim
Member

betatim commented Apr 5, 2016

Fewer iterations? I think that is the big selling point.

Does it make sense to add a test (or should it be a benchmark?) that puts the "clever" algorithms in a race against dummy_minimize to see which one converges faster? If implemented as a test, it would fail whenever an algorithm needs (statistically significantly) more iterations than dummy_minimize.

I vaguely remember that the random-search paper by Bergstra and co-authors concludes that random search is pretty much optimal (scientific term :) ) in high dimensions when some of the dimensions don't actually matter, and that you could only beat it if you knew which dimensions were irrelevant and removed them. Something along those lines.

Anyway, if we can't beat dummy_minimize then this becomes a very nice and compact module and for sure a publication ;)
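The proposed race could be set up by recording the best-so-far trace of each strategy and comparing how many calls each needs to reach a target value. A minimal sketch, with plain random search standing in for dummy_minimize and a hypothetical two-phase sampler standing in for a "clever" contender; neither is the actual skopt API:

```python
import numpy as np

def random_minimize(func, bounds, n_calls, rng):
    """Best-so-far trace of uniform random search (the dummy baseline)."""
    lo, hi = np.asarray(bounds, dtype=float).T
    ys = [func(rng.uniform(lo, hi)) for _ in range(n_calls)]
    return np.minimum.accumulate(ys)

def twophase_minimize(func, bounds, n_calls, rng):
    """Hypothetical 'clever' contender: explore uniformly for the first
    half of the budget, then sample around the incumbent best point."""
    lo, hi = np.asarray(bounds, dtype=float).T
    xs, ys = [], []
    for i in range(n_calls):
        if i < n_calls // 2:
            x = rng.uniform(lo, hi)
        else:
            best = xs[int(np.argmin(ys))]
            x = np.clip(best + 0.05 * (hi - lo) * rng.randn(len(lo)), lo, hi)
        xs.append(x)
        ys.append(func(x))
    return np.minimum.accumulate(ys)

def calls_to_reach(trace, target):
    """Number of evaluations before the trace first drops to `target`."""
    hits = np.nonzero(trace <= target)[0]
    return int(hits[0]) + 1 if hits.size else None

rng = np.random.RandomState(0)
func = lambda x: float(np.sum(x ** 2))  # stand-in objective, minimum 0
bounds = [[-2, 2], [-2, 2]]
trace_rand = random_minimize(func, bounds, 200, rng)
trace_two = twophase_minimize(func, bounds, 200, rng)
```

A race test would then assert, averaged over several seeds, that calls_to_reach for the clever trace is not statistically significantly larger than for the random one.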

@MechCoder
Member

@betatim Yes, I think that should be the first step from now on, before implementing new stuff.

  1. We should know if we are performing better than dummy search.
  2. We should know if any of the other new methods add any value.

@betatim
Member

betatim commented Apr 6, 2016

Should we rename dummy -> random? In particular, if it continues to be useful/competitive, we might want to give it a less patronising name.

glouppe deleted the more-tests branch April 21, 2016 11:37
holgern added a commit that referenced this pull request Feb 28, 2020


3 participants