Conversation

@peterbell10 (Collaborator)

Fixes #26333

Fixes the operators missed in #26507 and includes a test for all operators.

@peterbell10 requested a review from ezyang on October 5, 2019 14:44
@pytorchbot added the module: internals, module: operators, and module: pybind labels on Oct 5, 2019
torch/tensor.py (Outdated)

Contributor

Can you remind me why we don't implement the wrapping here directly in C++?

Collaborator Author

It doesn't make sense to return NotImplemented from Tensor.eq, which is what this is wrapping.
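
For context, a minimal sketch of the pattern under discussion. The helper name _wrap_type_error_to_not_implemented comes from this thread; the body below is an illustrative assumption, not the verbatim PyTorch source:

```python
import functools

def _wrap_type_error_to_not_implemented(f):
    # Sketch: translate a TypeError raised by the underlying Tensor method
    # (e.g. Tensor.eq) into NotImplemented, so that Python's binary-operator
    # protocol can fall back to the other operand's reflected method.
    @functools.wraps(f)
    def wrapped(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except TypeError:
            return NotImplemented
    return wrapped
```

The point being made: NotImplemented is meaningful only at the Python operator layer (__eq__, __rmul__, and friends), where the interpreter uses it to try the other operand's reflected method before raising; Tensor.eq itself must return a Tensor, so the translation happens in the Python wrapper rather than in C++.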

Contributor

cc @mruberry (any thoughts here?)

Collaborator

This is well written and makes good use of the existing test infrastructure. It may be overkill to add hundreds of tests for this behavior, but as long as they take <1s to run it's fine. (You can check the timestamps in the CI.)
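
As an illustration of the kind of generated per-operator coverage being discussed, here is a hypothetical sketch; the class and helper names are invented, and this is not the actual PyTorch test infrastructure:

```python
import operator
import unittest

import torch

# One generated test per binary operator, verifying that an unsupported
# right-hand operand triggers Python's reflected-operator fallback instead
# of a hard TypeError from inside the Tensor method.
BINARY_OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

class Fallback:
    # Defines only reflected methods, so these are reached only if the
    # Tensor's forward method returns NotImplemented.
    def __radd__(self, other): return "radd"
    def __rsub__(self, other): return "rsub"
    def __rmul__(self, other): return "rmul"

class TestOperatorFallback(unittest.TestCase):
    pass

def _make_test(name, op):
    def test(self):
        self.assertEqual(op(torch.ones(2), Fallback()), "r" + name)
    return test

for name, op in BINARY_OPS.items():
    setattr(TestOperatorFallback, "test_" + name, _make_test(name, op))

if __name__ == "__main__":
    unittest.main()
```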

@ezyang (Contributor) left a comment

OK. I'll leave a little time for mruberry to take a look and then merge.

Collaborator

Not "TestDevicePrecision" per below. You should acquire the class name from cls.

@ezyang (Contributor)

ezyang commented Oct 7, 2019

I strongly suspect the ONNX test failures are real. Probably just accepting the changes should be fine.

@peterbell10 (Collaborator Author)

I'm not familiar with ONNX; does it depend on the parameter names in the function signature? If that's the case, could this error be caused by _wrap_type_error_to_not_implemented returning a function that takes *args?

Oct 07 20:00:24 E   AssertionError: 'ir_version: 4\nproducer_name: "pytorch"\nproducer_version: "1.3"\ngraph {\n  no [truncated]... != u'ir_version: 4\nproducer_name: "pytorch"\nproducer_version: "1.3"\ngraph {\n  n [truncated]...
Oct 07 20:00:24 E     ir_version: 4
Oct 07 20:00:24 E     producer_name: "pytorch"
Oct 07 20:00:24 E     producer_version: "1.3"
Oct 07 20:00:24 E     graph {
Oct 07 20:00:24 E       node {
Oct 07 20:00:24 E   -     input: "0"
Oct 07 20:00:24 E   ?             ^
Oct 07 20:00:24 E   +     input: "x"
Oct 07 20:00:24 E   ?             ^
Oct 07 20:00:24 E   -     input: "1"
Oct 07 20:00:24 E   ?             ^
Oct 07 20:00:24 E   +     input: "y"
Oct 07 20:00:24 E   ?             ^
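
If the exporter infers input names from the traced function's signature (an assumption; the thread doesn't pin down the mechanism), a bare *args wrapper would indeed hide the original names, while functools.wraps preserves them for inspection. A minimal demonstration:

```python
import functools
import inspect

def f(x, y):
    return x + y

def bare_wrapper(*args, **kwargs):
    return f(*args, **kwargs)

@functools.wraps(f)
def wrapped(*args, **kwargs):
    return f(*args, **kwargs)

# The bare wrapper loses the parameter names entirely.
print(inspect.signature(bare_wrapper))  # (*args, **kwargs)

# functools.wraps sets __wrapped__, which inspect.signature follows,
# so the original names (x, y) are recoverable.
print(inspect.signature(wrapped))       # (x, y)
```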

@mruberry (Collaborator)

mruberry commented Oct 8, 2019

@houseroad for the ONNX question

@ezyang (Contributor)

ezyang commented Oct 9, 2019

@peterbell10 Yes, I don't exactly remember how argument name inference is done, but it certainly seems plausible that the decorator is thwarting (or un-thwarting, perhaps) the name inference. The names are all advisory anyway; if the graphs are alpha-equivalent either is fine.

@pytorchbot added the module: onnx label on Oct 9, 2019
@peterbell10 (Collaborator Author)

> The names are all advisory anyway; if the graphs are alpha-equivalent either is fine.

Okay, in that case I've just updated the test cases.

@ezyang (Contributor)

ezyang commented Oct 9, 2019

@pytorchbot rebase this please

@facebook-github-bot (Contributor) left a comment

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang (Contributor)

ezyang commented Oct 17, 2019

Sorry, it looks like this has merge conflicts now :(

@ezyang (Contributor)

ezyang commented Oct 22, 2019

Still failing tests:

Oct 17 17:05:59 ____________________________ TestOperators.test_ne _____________________________
Oct 17 17:05:59 
Oct 17 17:05:59 self = <test_operators.TestOperators testMethod=test_ne>
Oct 17 17:05:59 
Oct 17 17:05:59     def test_ne(self):
Oct 17 17:05:59         x = torch.randn(1, 2, 3, 1, requires_grad=False).int()
Oct 17 17:05:59         y = torch.randn(1, 4, requires_grad=False).int()
Oct 17 17:05:59 >       self.assertONNX(lambda x, y: torch.ne(x, y), (x, y))
Oct 17 17:05:59 
Oct 17 17:05:59 test/onnx/test_operators.py:633: 
Oct 17 17:05:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
Oct 17 17:05:59 test/onnx/test_operators.py:67: in assertONNX
Oct 17 17:05:59     self.assertExpected(onnx_model_pbtxt, subname)
Oct 17 17:05:59 test/common_utils.py:944: in assertExpected
Oct 17 17:05:59     self.assertMultiLineEqual(expected, s)
Oct 17 17:05:59 E   AssertionError: 'ir_version: 4\nproducer_name: "pytorch"\nproducer_version: "1.3"\ngraph {\n  no [truncated]... != u'ir_version: 4\nproducer_name: "pytorch"\nproducer_version: "1.3"\ngraph {\n  n [truncated]...
Oct 17 17:05:59 E     ir_version: 4
Oct 17 17:05:59 E     producer_name: "pytorch"
Oct 17 17:05:59 E     producer_version: "1.3"
Oct 17 17:05:59 E     graph {
Oct 17 17:05:59 E       node {
Oct 17 17:05:59 E   -     input: "x"
Oct 17 17:05:59 E   ?             ^
Oct 17 17:05:59 E   +     input: "0"
Oct 17 17:05:59 E   ?             ^
Oct 17 17:05:59 E   -     input: "y"
Oct 17 17:05:59 E   ?             ^
Oct 17 17:05:59 E   +     input: "1"
Oct 17 17:05:59 E   ?             ^
Oct 17 17:05:59 E         output: "2"
Oct 17 17:05:59 E         op_type: "Equal"
Oct 17 17:05:59 E       }
Oct 17 17:05:59 E       node {
Oct 17 17:05:59 E         input: "2"
Oct 17 17:05:59 E         output: "3"
Oct 17 17:05:59 E         op_type: "Not"
Oct 17 17:05:59 E       }
Oct 17 17:05:59 E       name: "torch-jit-export"
Oct 17 17:05:59 E       input {
Oct 17 17:05:59 E   -     name: "x"
Oct 17 17:05:59 E   ?            ^
Oct 17 17:05:59 E   +     name: "0"
Oct 17 17:05:59 E   ?            ^
Oct 17 17:05:59 E         type {
Oct 17 17:05:59 E           tensor_type {
Oct 17 17:05:59 E             elem_type: 6
Oct 17 17:05:59 E             shape {
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 1
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 2
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 3
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 1
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E             }
Oct 17 17:05:59 E           }
Oct 17 17:05:59 E         }
Oct 17 17:05:59 E       }
Oct 17 17:05:59 E       input {
Oct 17 17:05:59 E   -     name: "y"
Oct 17 17:05:59 E   ?            ^
Oct 17 17:05:59 E   +     name: "1"
Oct 17 17:05:59 E   ?            ^
Oct 17 17:05:59 E         type {
Oct 17 17:05:59 E           tensor_type {
Oct 17 17:05:59 E             elem_type: 6
Oct 17 17:05:59 E             shape {
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 1
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 4
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E             }
Oct 17 17:05:59 E           }
Oct 17 17:05:59 E         }
Oct 17 17:05:59 E       }
Oct 17 17:05:59 E       output {
Oct 17 17:05:59 E         name: "3"
Oct 17 17:05:59 E         type {
Oct 17 17:05:59 E           tensor_type {
Oct 17 17:05:59 E             elem_type: 9
Oct 17 17:05:59 E             shape {
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 1
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E               dim {
Oct 17 17:05:59 E                 dim_value: 2
Oct 17 17:05:59 E               }
Oct 17 17:05:59 E               dim {
Oct 17 17:06:00 E                 dim_value: 3
Oct 17 17:06:00 E               }
Oct 17 17:06:00 E               dim {
Oct 17 17:06:00 E                 dim_value: 4
Oct 17 17:06:00 E               }
Oct 17 17:06:00 E             }
Oct 17 17:06:00 E           }
Oct 17 17:06:00 E         }
Oct 17 17:06:00 E       }
Oct 17 17:06:00 E     }
Oct 17 17:06:00 E     opset_import {
Oct 17 17:06:00 E       version: 9
Oct 17 17:06:00 E     }
Oct 17 17:06:00 =============================== warnings summary ===============================

@peterbell10 (Collaborator Author)

Should be fixed now.

@ezyang (Contributor)

ezyang commented Oct 23, 2019

@pytorchbot rebase this please

@facebook-github-bot (Contributor) left a comment

@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@ezyang merged this pull request in f33813d.


Labels

Merged

- module: internals (Related to internal abstractions in c10 and ATen)
- module: onnx (Related to torch.onnx)
- module: pybind (Related to our Python bindings / interactions with other Python libraries)
- open source


Development

Successfully merging this pull request may close this issue:

torch.Tensor reverse operators (__rmul__) should return NotImplemented for unsupported types
