
Conversation


@t-vi t-vi commented Jul 20, 2018

Allows multiplication of e.g. numpy.float32 with tensors.

This came up with #9468

If you want this, I'll add tests once the other patch is done (doing it now would create conflicts, so I prefer to wait).
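A minimal sketch of the behavior this enables (illustrative only, assuming a build that includes this patch; the exact set of affected operations may differ):

import numpy as np
import torch

t = torch.ones(2)          # a float32 tensor
r = t * np.float32(2.0)    # the numpy scalar is treated like a plain Python number
print(r)                   # expected: tensor([2., 2.])
print(r.dtype)             # expected: torch.float32 (the tensor's dtype is kept)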


soumith commented Jul 21, 2018

@pytorchbot retest this please

@facebook-github-bot left a comment

@ssnl has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


gchanan commented Jul 22, 2018

looks good! Maybe add a couple of tests?


t-vi commented Jul 22, 2018

I would want to test this by adding numpy float32 scalars to the test introduced in #9651, so my original plan was to revise this PR after that to avoid the merge conflict.


ssnl commented Jul 23, 2018

#9651 is BC-breaking, so this one's probably easier to review. Do you want to add some tests for this first? You can always change the test in #9651 later.

If this is accepted soon, I can cherry-pick this in time into 0.4.1. But I don't think #9651 can make it.

for np_dtype in [np.float32, np.float64, np.int32, np.int64]:
    np_sc = np_dtype(2.0)
    t = torch.ones(2, requires_grad=True)
    r1 = t * np_sc



t-vi commented Jul 27, 2018

@ssnl I added a loop over float and double input tensors and assert that the output has the same dtype. Thanks for the suggestion to check the result dtype.
The macOS build failure looks like a network timeout, so it doesn't seem related to my patch.
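A rough standalone sketch of such a test (hypothetical; the actual test added by this PR lives in the PyTorch test suite and uses its TestCase helpers):

import numpy as np
import torch

# Loop over float and double input tensors, multiply by assorted numpy
# scalars, and check that the result is a tensor of the same dtype that
# still participates in autograd.
for tensor_dtype in [torch.float32, torch.float64]:
    for np_dtype in [np.float32, np.float64, np.int32, np.int64]:
        np_sc = np_dtype(2.0)
        t = torch.ones(2, dtype=tensor_dtype, requires_grad=True)
        r = t * np_sc
        assert isinstance(r, torch.Tensor)
        assert r.dtype == tensor_dtype
        assert r.requires_grad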

@facebook-github-bot left a comment

SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@zou3519 added the "ready for review" label Jul 31, 2018
@weiyangfb

@t-vi needs a rebase now :( Is this ready for merging now? @ssnl @gchanan


yf225 commented Aug 21, 2018

@t-vi We probably need to rebase this PR onto current master


t-vi commented Aug 21, 2018 via email

@facebook-github-bot left a comment

weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@weiyangfb

is this ready to go? @gchanan @ssnl


ssnl commented Aug 29, 2018

LGTM. @gchanan any other thoughts?

for np_dtype in [np.float32, np.float64, np.int32, np.int64]:
    r2 = t * np_sc
    self.assertIsInstance(r2, torch.Tensor)
    self.assertTrue(r2.requires_grad)


 inline bool THPUtils_checkDouble(PyObject* obj) {
 #if PY_MAJOR_VERSION == 2
-  return PyFloat_Check(obj) || PyLong_Check(obj) || PyInt_Check(obj);
+  return PyFloat_Check(obj) || PyLong_Check(obj) || PyInt_Check(obj) || torch::utils::is_numpy_scalar(obj);


+  return PyFloat_Check(obj) || PyLong_Check(obj) || PyInt_Check(obj) || torch::utils::is_numpy_scalar(obj);
 #else
-  return PyFloat_Check(obj) || PyLong_Check(obj);
+  return PyFloat_Check(obj) || PyLong_Check(obj) || torch::utils::is_numpy_scalar(obj);
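For context on why PyFloat_Check alone is not enough here: np.float64 subclasses Python's builtin float, but np.float32 and the numpy integer scalar types do not, so they are only accepted once the check falls through to torch::utils::is_numpy_scalar. A quick Python-level illustration (independent of this patch):

import numpy as np

print(isinstance(np.float64(2.0), float))  # True  -- np.float64 subclasses float
print(isinstance(np.float32(2.0), float))  # False -- needs the numpy-scalar check
print(isinstance(np.int64(2), int))        # False on Python 3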


And expand the range of tested numpy types.
Thank you, gchanan, for the suggestions!

t-vi commented Aug 31, 2018

Is the TestAdam.test_sparse_adam_output_grad failure caused by this PR? At first sight, I don't quite see how it relates to the changes.


gchanan commented Aug 31, 2018

@pytorchbot retest this please.


gchanan commented Aug 31, 2018

I don't think the test failures are relevant; other PRs failed with similar errors around the same time.

@facebook-github-bot left a comment

weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@weiyangfb

Looks like the comments are addressed; should we merge this? @gchanan

@gchanan left a comment

lgtm.

petrex pushed a commit to petrex/pytorch that referenced this pull request Sep 5, 2018
resolve conflict in data parallel model
* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Summary:
Allows multiplication of e.g. numpy.float32 with tensors.

This came up with pytorch#9468

If you want this, I'll add tests once the other patch is done (doing it now would create conflicts, so I prefer to wait).
Pull Request resolved: pytorch#9659

Differential Revision: D8948078

Pulled By: weiyangfb

fbshipit-source-id: c7dcc57b63e2f100df837f70e1299395692f1a1b

Labels

open source, ready for review

Projects

None yet

9 participants