[MRG + 2] FIX precision to float64 across the codebase #5375
glouppe merged 1 commit into scikit-learn:master from …
Conversation
Force-pushed 7a6f80c to 37794ea
Did you just replace all float32 with float64? Some parts of sklearn work in float32, right? Or not any more? The trees do and the SVM does IIRC. So in some places we want float32.
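The precision gap behind this concern can be seen directly in NumPy. This snippet is illustrative only, not taken from the PR; it shows why a blanket float32 → float64 swap (or the reverse) changes numerical behaviour:

```python
import numpy as np

# Machine epsilon differs by nine orders of magnitude between the dtypes:
print(np.finfo(np.float32).eps)  # ~1.19e-07 (~7 decimal digits)
print(np.finfo(np.float64).eps)  # ~2.22e-16 (~16 decimal digits)

# 0.1 is not exactly representable in binary; the float32 rounding error
# becomes visible once the value is widened back to a Python float64.
print(float(np.float32(0.1)) == 0.1)  # False
```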
I've changed …
Are these the proper things to do?
Force-pushed 37794ea to 461bf92
Tests that check for np.float32 should be left unchanged.
No, this is a wrong assumption: 32 bit is less precise than 64 bit, and some of our functions and class methods might use the dtype of the input to select the precision level of their internal data structures. Please leave the tests that check for np.float32 unchanged.
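A minimal sketch of the behaviour described above, using a hypothetical `center` helper (not an actual scikit-learn function): internal precision follows the input dtype, which is exactly the contract a test asserting np.float32 output would guard.

```python
import numpy as np

def center(X):
    # Hypothetical helper: all internal work stays in the input's dtype,
    # so float32 input keeps 32-bit precision throughout.
    X = np.asarray(X)
    mean = X.mean(axis=0, dtype=X.dtype)  # accumulate in the input's dtype
    return X - mean

X32 = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
print(center(X32).dtype)                     # float32 is preserved
print(center(X32.astype(np.float64)).dtype)  # float64 is preserved
```

Rewriting such a test to expect np.float64 would silently accept an accidental upcast, which is why the reviewer asks for the original assertions to stay.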
Thanks for clarifying that! I'll revert in a minute :)
Note that as far as I know …
Force-pushed 31d1de0 to 1cc1dee
@ogrisel Have reverted all …
Force-pushed 1cc1dee to bb5c10c
Wait, so the type is the same but the string representation is different? That is somewhat confusing, isn't it?
As far as my understanding goes, numpy.float is an alias of Python's default float, while numpy.float64 is a scalar type defined by NumPy.
(This answer seems to explain it, especially the comment below it!)
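The relationship can be checked interactively. Note that in current NumPy the np.float alias discussed here no longer exists (it was deprecated in NumPy 1.20 and removed in 1.24), so this sketch uses the builtin float and np.float64 directly:

```python
import numpy as np

# np.float64 is a NumPy scalar type that subclasses Python's built-in float,
# so instances pass isinstance(..., float) even though the reprs differ.
x = np.float64(1.5)
print(type(x))               # <class 'numpy.float64'>
print(isinstance(x, float))  # True

# A plain Python float maps to the float64 dtype:
print(np.dtype(float) == np.float64)  # True
```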
lgtm
@ogrisel could you take a look? :)
Force-pushed bb5c10c to a870112
Looks good to me too. +1 for merge.
@glouppe Thanks :)
Implementing #5356 (comment)
@ogrisel @giorgiop Does this look okay?