This repository was archived by the owner on Feb 28, 2024. It is now read-only.

Conversation

@jonathanng

No description provided.

@pep8speaks

pep8speaks commented Feb 1, 2018

Hello @jonathanng! Thanks for updating the PR.

Cheers ! There are no PEP8 issues in this Pull Request. 🍻

Comment last updated on February 01, 2018 at 17:38 Hours UTC

@jonathanng jonathanng changed the title added n_jobs subbort to optimizer added n_jobs support to optimizer Feb 1, 2018
@codecov-io

Codecov Report

Merging #627 into master will increase coverage by <.01%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #627      +/-   ##
==========================================
+ Coverage    86.6%   86.61%   +<.01%     
==========================================
  Files          23       23              
  Lines        1777     1778       +1     
==========================================
+ Hits         1539     1540       +1     
  Misses        238      238
Impacted Files                 Coverage Δ
skopt/optimizer/optimizer.py   96.29% <100%> (-0.02%) ⬇️
skopt/utils.py                 91.22% <100%> (+0.1%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 20e46a7...1c72154. Read the comment docs.

@jonathanng
Author

I find that for a large number of data points, GP is very slow. This pull request does not fix that. Is anyone else seeing this issue?

@betatim
Member

betatim commented Feb 4, 2018

I think the fact that GPs slow down as the number of data points increases is one of the drawbacks of GPs. I tried to find the relevant bit of the scikit-learn docs that explains the scaling behaviour but I couldn't find it.

@iaroslav-ai
Member

Yup, a normal thing for GPs:

https://arxiv.org/pdf/1502.05700.pdf see Figure 1 there

@jonathanng maybe add this somewhere in the comments if you have time? Otherwise we could add it later
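As an aside, the cubic scaling mentioned above is easy to observe directly. A minimal sketch (assuming scikit-learn is available; this code is illustrative and not part of the PR). Fixing the kernel hyperparameters with `optimizer=None` isolates the cost of the kernel-matrix factorization:

```python
import time

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Fitting a GP requires factorizing the n x n kernel matrix, so fit
# time grows roughly O(n^3) in the number of training points.
rng = np.random.RandomState(0)
for n in (100, 400, 1600):
    X = rng.uniform(-2, 2, size=(n, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.randn(n)
    gp = GaussianProcessRegressor(optimizer=None)  # skip hyperparameter search
    start = time.perf_counter()
    gp.fit(X, y)
    print(n, "points:", round(time.perf_counter() - start, 3), "s")
```

Each 4x increase in points costs roughly 64x in the solve, which matches the slowdown reported here.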

@iaroslav-ai
Member

I still think having control over n_jobs would be very beneficial, especially if you run multiple optimizations at the same time, for example to test the algorithm

 def __init__(self, dimensions, base_estimator="gp",
              n_random_starts=None, n_initial_points=10,
-             acq_func="gp_hedge",
+             n_jobs=1, acq_func="gp_hedge",

Please add a description of n_jobs to the docstrings.


+1

@iaroslav-ai
Member

Beyond the n_jobs docs thingy LGTM! Please add [MRG] to the beginning of the name of the PR if you are ready to merge, or [WIP] if you wish to work a bit more on this.
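For context, the n_jobs-style parallelism discussed in this thread follows the joblib pattern used across the scikit-learn ecosystem. A minimal sketch (the objective function here is a hypothetical stand-in, not code from this PR):

```python
from joblib import Parallel, delayed


def objective(x):
    """Toy objective evaluated at one candidate point."""
    return (x - 1.0) ** 2


# n_jobs=2 runs evaluations in two workers; n_jobs=1 keeps everything
# sequential, and n_jobs=-1 uses all available cores.
candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
results = Parallel(n_jobs=2)(delayed(objective)(x) for x in candidates)
print(results)  # [1.0, 0.25, 0.0, 0.25, 1.0]
```

Note this parallelizes work inside one optimization step; it does not change the O(n^3) GP fitting cost discussed above.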

@glouppe glouppe changed the title added n_jobs support to optimizer [MRG] added n_jobs support to optimizer Mar 15, 2018
@holgern holgern closed this Feb 26, 2020