DOC use notebook-style for plot_faces_decomposition #22452
Conversation
  for name, estimator, center in estimators:
-     print("Extracting the top %d %s..." % (n_components, name))
+     print("- Extracting the top %d '%s'..." % (n_components, name))
I propose to break this for loop and instead create a cell for each estimator where:
- we define the model
- and plot the gallery
This has the benefit of removing the if/else used to switch behaviour depending on the estimator.
I like the idea. Done in 43d4a5b.
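For illustration, one such notebook-style cell might look like the sketch below. The names `faces_centered`, `n_components` and `plot_gallery` belong to the example; here synthetic data and a shape check stand in for them so the snippet runs on its own, so this is only a sketch of the cell layout, not the example's actual code.

```python
# Sketch of a single "define, fit, plot" cell; with one cell per estimator,
# no if/else on the estimator type is needed inside a loop.
import numpy as np
from sklearn import decomposition

rng = np.random.RandomState(0)
n_components = 6
faces_centered = rng.randn(100, 64)  # stand-in for the centered faces data

# %%
# Eigenfaces - PCA using randomized SVD
pca_estimator = decomposition.PCA(
    n_components=n_components, svd_solver="randomized", whiten=True
)
pca_estimator.fit(faces_centered)
# in the example this cell would end with:
# plot_gallery("Eigenfaces - PCA using randomized SVD", pca_estimator.components_)
print(pca_estimator.components_.shape)  # (6, 64)
```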
  # #############################################################################
  # Various positivity constraints applied to dictionary learning.
- estimators = [
+ dict_estimators = [
Could you break this for loop into three separate cells as well?
Also done in 43d4a5b.
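A hedged sketch of what one of the three resulting cells could look like (the positive-dictionary case; the other two would set `positive_code=True`, or both flags). The data and parameter values below are illustrative stand-ins, not the example's actual settings.

```python
# Sketch of the "Dictionary learning - positive dictionary" cell.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
faces_centered = rng.randn(100, 64)  # stand-in for the centered faces data

# %%
# Dictionary learning - positive dictionary
dict_pos_estimator = MiniBatchDictionaryLearning(
    n_components=6,
    alpha=0.1,
    fit_algorithm="cd",   # coordinate descent supports positivity constraints
    positive_dict=True,   # constrain the dictionary atoms to be non-negative
    random_state=0,
)
dict_pos_estimator.fit(faces_centered)
# the constraint holds on the learned dictionary atoms
print((dict_pos_estimator.components_ >= 0).all())
```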
Due to the changed order of generated images, it was also necessary to change the links to the images in the decomposition documentation.
# subcomponents that are maximally independent.

# %%
ica_estimator = decomposition.FastICA(n_components=n_components, whiten=True)
We can remove the FutureWarning.
Suggested change:
- ica_estimator = decomposition.FastICA(n_components=n_components, whiten=True)
+ ica_estimator = decomposition.FastICA(
+     n_components=n_components, whiten='arbitrary-variance'
+ )
Regarding the ConvergenceWarning, could you increase the number of iterations (or maybe the tolerance) so that we no longer get this warning?
Fixed: increased max_iter and tol.
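For reference, the resulting FastICA cell could look roughly like this, assuming scikit-learn >= 1.1 (where `whiten="arbitrary-variance"` is available). The `max_iter` and `tol` values are illustrative, and synthetic data stands in for `faces_centered`.

```python
# Sketch of the FastICA cell with the FutureWarning-free whiten setting.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
faces_centered = rng.randn(200, 64)  # stand-in for the centered faces data

# %%
ica_estimator = FastICA(
    n_components=6,
    whiten="arbitrary-variance",  # avoids the whiten=True FutureWarning
    max_iter=400,                 # more iterations and a looser tolerance
    tol=15e-5,                    # help avoid the ConvergenceWarning on faces
    random_state=0,
)
ica_estimator.fit(faces_centered)
print(ica_estimator.components_.shape)  # (6, 64)
```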
# Sparse comp. - MiniBatchSparsePCA
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suggested change:
- # Sparse comp. - MiniBatchSparsePCA
- # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # Sparse components - MiniBatchSparsePCA
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Done
# %%
# Independent components - FastICA
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suggested change:
- # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Fixed all similar problems.
# %%
# Non-negative components - NMF
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suggested change:
- # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# %%
# Eigenfaces - PCA using randomized SVD
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suggested change:
- # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Dictionary learning - positive dictionary
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suggested change:
- # Dictionary learning - positive dictionary
- # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # Dictionary learning - positive dictionary
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Dictionary learning - positive code
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suggested change:
- # Dictionary learning - positive code
- # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # Dictionary learning - positive code
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Dictionary learning - positive dictionary & code
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suggested change:
- # Dictionary learning - positive dictionary & code
- # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ # Dictionary learning - positive dictionary & code
+ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#

# %%
In this section and the subsequent ones, it would be nice to add a short summary of what constraint we imposed on the model and how it translates into the components. No need for a long discussion, but it is currently slightly rough.
I have added a short description to the dictionary learning cells. There is one nuance: I did not find a comprehensive explanation of "code" in the documentation, but as far as I understand it refers to the coding coefficients from sparse coding, so I have described it that way for now.
    n_components=n_components, svd_solver="randomized", whiten=True
)
pca_estimator.fit(faces_centered)
plot_gallery(
I think we should plot a colorbar with the min/max values.
Without it, I find it hard to grasp that many components are zero in SparsePCA, for instance; I would make the same comment about components that are only positive.
A colorbar would resolve those ambiguities.
Added a colorbar; the plots look more informative now.
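A sketch of how a `plot_gallery` helper with a shared, symmetric colorbar might be written. The function name matches the example's helper, but this body is an illustrative reconstruction under assumed image shape and layout, not the PR's actual code.

```python
# Gallery of component images with one shared colorbar; symmetric limits
# around zero make both signs and exact zeros easy to read off.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt


def plot_gallery(title, images, n_col=3, n_row=2, image_shape=(8, 8)):
    fig, axs = plt.subplots(n_row, n_col, figsize=(2.0 * n_col, 2.3 * n_row))
    fig.suptitle(title, size=16)
    # symmetric color limits centered on zero
    vmax = max(images.max(), -images.min())
    for ax, comp in zip(axs.flat, images):
        im = ax.imshow(
            comp.reshape(image_shape), cmap="gray", vmin=-vmax, vmax=vmax
        )
        ax.axis("off")
    # one colorbar shared by all panels, showing the min/max values
    fig.colorbar(im, ax=axs, orientation="horizontal", shrink=0.9, pad=0.02)
    return fig


rng = np.random.RandomState(0)
fig = plot_gallery("Random components", rng.randn(6, 64))
print(len(fig.axes))  # 6 image axes plus the colorbar axis
```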
Oh wow, good catch @Valavanca! This is quite easy to overlook and definitely something to keep in mind for other PRs on this issue.
Thanks @Valavanca! The example is much nicer now.