Add memory format support to the resize_ op.
#28292
Conversation
This allows simplifying patterns like:
1.
output.resize_({sizeB, sizeC, osizeH, osizeW}).as_strided_({sizeB, sizeC, osizeH, osizeW}, {sizeC*osizeH*osizeW, 1, osizeW*sizeC, sizeC});
2.
output.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
indices.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
output.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
indices.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
3.
gradInput.resize_as_(input);
gradInput.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
CC @jjsjann123
The rationale for adding an optional memory format to the `resize_as_` and `resize_` operators is that they are frequently used as in-place `empty_like` and `empty` operators inside our code base, so having them accept a memory format, just as `empty_like` and `empty` do, is logical.
We could instead add two new operators, but that would be more confusing (given that renaming the existing operators is not an option).
---
This allows simplifying patterns like:
1.
output.resize_({sizeB, sizeC, osizeH, osizeW}).as_strided_({sizeB, sizeC, osizeH, osizeW}, {sizeC*osizeH*osizeW, 1, osizeW*sizeC, sizeC});
2.
output.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
indices.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
output.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
indices.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
3.
gradInput.resize_as_(input);
gradInput.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
Differential Revision: [D18044978](https://our.internmc.facebook.com/intern/diff/D18044978)
CC @gchanan similar to
aten/src/ATen/native/Resize.cpp
Outdated
self_->maybe_zero_dim(size.size() == 0);
if (optional_memory_format.has_value()) {
  auto memory_format =
      optional_memory_format.value_or(MemoryFormat::Contiguous);
You already know the optional has a value at this point, so you don't need `value_or`.
aten/src/ATen/native/ResizeCommon.h
Outdated
"). This may be caused by passing a named tensor ",
"as an `out=` argument; please ensure that the sizes are the same. ");
TORCH_CHECK(
    optional_memory_format.value_or(MemoryFormat::Contiguous) ==
Shouldn't this just check whether `optional_memory_format` has a value? The intended behavior is to restride only when a value is present, and here we don't restride.
aten/src/ATen/native/cuda/Resize.cu
Outdated
self_->maybe_zero_dim(size.size() == 0);
if (optional_memory_format.has_value()) {
  auto memory_format =
      optional_memory_format.value_or(MemoryFormat::Contiguous);
Same as the CPU path: this should just use `value()`.
Tensor& self,
IntArrayRef size,
c10::optional<MemoryFormat> optional_memory_format) {
  auto memory_format =
Same as the named-tensor case: shouldn't this check for *not* having a value? This behavior differs from non-quantized tensors, where we actually make the tensor contiguous even if the size already matches.
IntArrayRef size,
c10::optional<MemoryFormat> optional_memory_format) {
  TORCH_CHECK(
      !optional_memory_format.has_value(),
Are you sure we need this? Quantized tensors should just be resizable in the same way.
gchanan left a comment:
I'm okay with this but you should get someone from quantization to test/okay the quantization parts.
@VitalyFedyunin merged this pull request in cb43170.
Summary:
Pull Request resolved: pytorch/pytorch#28292
Allows to simplify patterns like:
1. output.resize_({sizeB, sizeC, osizeH, osizeW}).as_strided_({sizeB, sizeC, osizeH, osizeW}, {sizeC*osizeH*osizeW, 1, osizeW*sizeC, sizeC});
2. output.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
indices.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
output.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
indices.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
3. gradInput.resize_as_(input);
gradInput.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
Test Plan: Imported from OSS
Differential Revision: D18044978
Pulled By: VitalyFedyunin
fbshipit-source-id: bbf67c25f9cf88bc6e949089a3b247df50f86dc4
The rationale for adding an optional memory format to the `resize_as_` and `resize_` operators is that they are frequently used as in-place `empty_like` and `empty` operators inside our code base, so having them accept a memory format, just as `empty_like` and `empty` do, is logical. We could instead add two new operators, but that would be more confusing (given that renaming the existing operators is not an option).
Stack from ghstack:
- #28292 Add memory format support to the `resize_` op.
- #28076 Kill `operator==` of TensorOptions as confusing one
- #27979 Add memory format support to `resize_as_` operator

This allows simplifying patterns like:
1.
output.resize_({sizeB, sizeC, osizeH, osizeW}).as_strided_({sizeB, sizeC, osizeH, osizeW}, {sizeC*osizeH*osizeW, 1, osizeW*sizeC, sizeC});
2.
output.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
indices.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
output.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
indices.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
3.
gradInput.resize_as_(input);
gradInput.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
Differential Revision: D18044978