Add example to show random search vs gp #50
Conversation
🍒 picking :) To avoid the option of tweaking
I thought about that as well. But finding the mean score at every iteration kind of defeats the purpose, right? Typically you have the budget to only run n iterations and not
Maybe, I'll give it a try.
If we repeat the example 100 times with 100 different seeds and GP wins ~50 times while random wins ~50 times, the conclusion would be that there isn't much difference. If, however, GP wins in 90 cases, you'd conclude that GP is really much smarter.
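(A minimal sketch of that repeated-seeds idea, assuming skopt's `gp_minimize` and `dummy_minimize`; the toy objective, search space and call budget below are placeholders for illustration, not the benchmark from this PR.)

```python
import numpy as np
from skopt import gp_minimize, dummy_minimize

def f(x):
    # toy 1-D objective standing in for the real benchmark
    return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))

space = [(-2.0, 2.0)]
n_repeats = 100  # reduce for a quick run; 100 repeats of GP is slow
gp_finals, random_finals = [], []
for seed in range(n_repeats):
    # same seed for both methods in each repeat, different seed per repeat
    gp_finals.append(gp_minimize(f, space, n_calls=30, random_state=seed).fun)
    random_finals.append(dummy_minimize(f, space, n_calls=30, random_state=seed).fun)

gp_wins = sum(g < r for g, r in zip(gp_finals, random_finals))
print("GP found a better minimum in %d/%d repeats" % (gp_wins, n_repeats))
```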
Indeed, I agree with you!
Wondering if the average score after each iteration is the best way to visualise this, or if it would be better to show the distribution of the final score (after 100 iterations) for each method.
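(If one went the distribution route, a hedged sketch could be two overlaid histograms; this assumes `gp_finals` and `random_finals` were collected as in the sketch above, one final best value per repeated run.)

```python
import matplotlib.pyplot as plt

plt.hist(gp_finals, bins=20, alpha=0.5, label="gp_minimize")
plt.hist(random_finals, bins=20, alpha=0.5, label="dummy_minimize (random)")
plt.xlabel("best objective value found per run")
plt.ylabel("number of repeats")
plt.legend()
plt.show()
```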
I have changed the example.
Thanks! However, I think we should not mix the messages and illustrate only one concept at a time. What do you think of only showing how to use
Removed the comparison with dummy_search and updated the example to just show how to use it in combination with an sklearn estimator.
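(A hedged sketch of that pattern, assuming skopt's `gp_minimize`; the dataset, parameter names and ranges below are illustrative and not necessarily those used in the PR's example.)

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize

X, y = load_digits(return_X_y=True)
rfc = RandomForestClassifier(n_estimators=20, random_state=0)

def objective(params):
    max_depth, min_samples_split, min_samples_leaf = params
    rfc.set_params(max_depth=max_depth,
                   min_samples_split=min_samples_split,
                   min_samples_leaf=min_samples_leaf)
    # gp_minimize minimises, so return the negative mean CV accuracy
    return -cross_val_score(rfc, X, y, cv=3, n_jobs=-1).mean()

space = [(2, 20),   # max_depth
         (2, 10),   # min_samples_split
         (1, 10)]   # min_samples_leaf
res = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best CV accuracy:", -res.fun)
print("best parameters:", res.x)
```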
examples/plot_random_vs_gp.py
Outdated
params = {
    'max_depth': [max_depth], 'max_features': [max_features],
    'min_samples_split': [mss], 'min_samples_leaf': [msl]}
gscv = GridSearchCV(rfc, params, n_jobs=-1)
Not sure why GridSearchCV is needed. cross_val_score should be enough.
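(A sketch of what that simplification might look like; `rfc`, `X`, `y` and the sampled parameter values are assumed to come from the example's surrounding code, as in the snippet above, so this is not self-contained.)

```python
from sklearn.model_selection import cross_val_score

# set the sampled hyperparameters directly on the estimator ...
rfc.set_params(max_depth=max_depth, max_features=max_features,
               min_samples_split=mss, min_samples_leaf=msl)
# ... and take the mean cross-validated score, without a one-point grid search
score = cross_val_score(rfc, X, y, n_jobs=-1).mean()
```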
@glouppe Addressed!
Merging!

