Conversation

@VitalyFedyunin (Contributor) commented Oct 18, 2019

The rationale for adding an optional memory format to the `resize_as_` and `resize_` operators is that they are frequently used as in-place `empty_like` and `empty` operators inside our code base. Having them accept a memory format, similarly to `empty_like` and `empty`, therefore seems logical.

We could also add two new operators, but that would be more confusing (taking into account that renaming the existing operators is not an option).


Stack from ghstack:

This allows simplifying patterns like:

1.
	output.resize_({sizeB, sizeC, osizeH, osizeW}).as_strided_({sizeB, sizeC, osizeH, osizeW}, {sizeC*osizeH*osizeW, 1, osizeW*sizeC, sizeC});

2.
	output.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
	indices.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
	output.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
	indices.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);

3.
	gradInput.resize_as_(input);
	gradInput.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);

Differential Revision: D18044978

@VitalyFedyunin (Contributor, Author) commented:
CC @jjsjann123

VitalyFedyunin added a commit that referenced this pull request Oct 18, 2019
ghstack-source-id: de38e28
Pull Request resolved: #28292
VitalyFedyunin added a commit that referenced this pull request Oct 21, 2019
ghstack-source-id: f373cbe
Pull Request resolved: #28292
VitalyFedyunin added a commit that referenced this pull request Oct 21, 2019
ghstack-source-id: af48d69
Pull Request resolved: #28292
VitalyFedyunin added a commit that referenced this pull request Oct 22, 2019
ghstack-source-id: c548695
Pull Request resolved: #28292
VitalyFedyunin added a commit that referenced this pull request Oct 23, 2019
ghstack-source-id: ebad22f
Pull Request resolved: #28292
@VitalyFedyunin (Contributor, Author) commented:
CC @gchanan similar to resize_as_ PR.

Contributor review comment on:

	self_->maybe_zero_dim(size.size() == 0);
	if (optional_memory_format.has_value()) {
	  auto memory_format =
	      optional_memory_format.value_or(MemoryFormat::Contiguous);

> You already know the optional has a value here, so you don't need to use value_or.

Contributor review comment on:

	"). This may be caused by passing a named tensor ",
	"as an `out=` argument; please ensure that the sizes are the same. ");
	TORCH_CHECK(
	    optional_memory_format.value_or(MemoryFormat::Contiguous) ==

> Shouldn't this just check whether optional_memory_format has a value? The intended behavior is that we only restride if there's a value, and here we don't restride.

Contributor review comment on:

	self_->maybe_zero_dim(size.size() == 0);
	if (optional_memory_format.has_value()) {
	  auto memory_format =
	      optional_memory_format.value_or(MemoryFormat::Contiguous);

> Same as the CPU case: this should just use value.

Contributor review comment on:

	Tensor& self,
	IntArrayRef size,
	c10::optional<MemoryFormat> optional_memory_format) {
	  auto memory_format =

> Same as the named tensor case: shouldn't this check for not having a value? This behavior differs from non-quantized tensors, where we will actually make something contiguous even if the size matches.

Contributor review comment on:

	IntArrayRef size,
	c10::optional<MemoryFormat> optional_memory_format) {
	TORCH_CHECK(
	    !optional_memory_format.has_value(),

> Are you sure we need this? Quantized tensors should be resizable in the same way.

@gchanan (Contributor) left a review comment:

> I'm okay with this, but you should get someone from quantization to test/okay the quantization parts.

@facebook-github-bot (Contributor) commented:
@VitalyFedyunin merged this pull request in cb43170.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Nov 18, 2019
Summary:
Pull Request resolved: pytorch/pytorch#28292


Test Plan: Imported from OSS

Differential Revision: D18044978

Pulled By: VitalyFedyunin

fbshipit-source-id: bbf67c25f9cf88bc6e949089a3b247df50f86dc4
@facebook-github-bot facebook-github-bot deleted the gh/VitalyFedyunin/18/head branch November 21, 2019 15:16