
Conversation

ezyang (Contributor) commented Oct 18, 2019

Stack from ghstack:

This PR eliminates the static distinction between Tensor and Variable.
Every Variable is a Tensor; there is no need to static_cast or to call
the Variable constructor.

To do this, Tensor needs API parity with Variable. I have already moved
most of the methods I don't want on Tensor off of Variable; the
implementations that remain are all placed in Tensor.cpp.

One API difference is that all Variable methods are now const, so we no
longer have faux const-correctness (see zdevito/ATen#27 for the back
story).
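
Why const everywhere is harmless here: Tensor is a pointer-style handle
to a TensorImpl, so a const method on the handle can still mutate the
underlying implementation. A minimal sketch with hypothetical stand-in
types (not the real ATen definitions):

```cpp
#include <memory>

struct TensorImpl {
  bool requires_grad_ = false;
};

struct Tensor {
  std::shared_ptr<TensorImpl> impl_ = std::make_shared<TensorImpl>();

  // const on the handle: impl_ itself is never reassigned...
  void set_requires_grad(bool b) const {
    impl_->requires_grad_ = b;  // ...but the pointee is freely mutated
  }
};
```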

This diff is BC-breaking in a few ways:

  • Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for
    torch::autograd functions no longer works; you have to explicitly qualify
    them with torch::autograd (examples: torch/nn/parallel/data_parallel.h; see
    the sketch after this list)
  • Because Variable and Tensor are now the same type, code that assumes they
    are different types (e.g., for the purposes of templating, or enable_if checks)
    will not work until you delete the (now) redundant overload/specialization
    (examples: torch/nn/modules/container/any.h, torch/csrc/utils/pybind.h)
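
A self-contained sketch of both breakages, with stand-in types (the names
mirror the real ones, but nothing below is the actual PyTorch API):

```cpp
// Before this PR, Variable was its own class in torch::autograd, so an
// unqualified call with a Variable argument found torch::autograd
// functions via argument-dependent lookup (ADL).  With Variable reduced
// to an alias of at::Tensor, the associated namespace is at::, and the
// unqualified call no longer resolves.
namespace at { struct Tensor {}; }

namespace torch { namespace autograd {
using Variable = at::Tensor;  // the alias introduced by this PR
inline void backward(const Variable& /*v*/) {}
}} // namespace torch::autograd

// Likewise, a pair of overloads such as
//   void f(const at::Tensor&);
//   void f(const torch::autograd::Variable&);  // now the same signature
// is a redefinition, and the redundant one must be deleted.

int main() {
  torch::autograd::Variable v;
  // backward(v);                // compiled via ADL before; now an error
  torch::autograd::backward(v);  // must be explicitly qualified
}
```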

Some other notes:

  • I'm not sure what was going on with the old template implementation of
    extract_vars, but I couldn't get the SFINAE version to work. Replacing it
    with an overloading-based version made it work (a sketch of the pattern
    follows below).
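
For flavor, here is a minimal sketch of the overloading-based pattern
(hypothetical, self-contained types; not the actual extract_vars code):
two overloads, one matching a Tensor at the head of the argument pack and
one matching anything else, replace an enable_if-gated template.

```cpp
#include <utility>
#include <vector>

namespace sketch {

struct Tensor {};
using variable_list = std::vector<Tensor>;

// Base case: nothing left to examine.
inline void extract_vars(variable_list&) {}

// Tensor at the head: collect it, then recurse on the tail.  Partial
// ordering prefers this overload whenever the head really is a Tensor.
template <typename... Rest>
void extract_vars(variable_list& out, const Tensor& head, Rest&&... rest) {
  out.push_back(head);
  extract_vars(out, std::forward<Rest>(rest)...);
}

// Anything else at the head: skip it and recurse.
template <typename Other, typename... Rest>
void extract_vars(variable_list& out, const Other& /*skip*/, Rest&&... rest) {
  extract_vars(out, std::forward<Rest>(rest)...);
}

} // namespace sketch
```

Calling sketch::extract_vars(out, Tensor{}, 42, Tensor{}) leaves two
tensors in out, with no enable_if gymnastics required.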

Signed-off-by: Edward Z. Yang [email protected]

Differential Revision: D18571426

This PR eliminates the static (but not dynamic) distinction between
Tensor and Variable.  Every Variable is a Tensor; there is no need to
static_cast or to call the Variable constructor.  The dynamic distinction
will be eliminated in a later diff.

To do this, Tensor needs API parity with Variable.  Thanks to the
efforts of Will Feng and others, most of the hard work has already been
done; I just dump all public methods on Variable into Tensor.  After
doing this, there are a few places the implementations migrate to:

- Some previously inline implementations only reference TensorImpl.
  These can stay inline in TensorBody.h.
- Some previously inline implementations reference AutogradMeta.
  For the time being, AutogradMeta continues to live in variable.h;
  thus, these implementations must move out of line, into Tensor.cpp
  (see the sketch after this list).
- However, there are also some template methods.  Those methods are
  retained in variable.h.
- Some previous implementations are defined in native_functions.yaml.
  In this case, I don't define them explicitly in Tensor; instead
  they are placed in VariableTypeManual.cpp.  Doing this would have
  deleted their documentation, so that documentation was moved to
  native_functions.yaml.
- All out-of-line implementations that don't fall under the previous
  categories get put in Tensor.cpp.
- Private inline methods got turned into non-method helper functions.
  There was only one of these, _create_cpp_hook.
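
A self-contained sketch (hypothetical names, not the real classes) of the
inline vs. out-of-line split driven by where AutogradMeta is defined:

```cpp
#include <memory>

struct AutogradMeta { bool requires_grad_ = false; };

struct TensorImpl {
  std::unique_ptr<AutogradMeta> autograd_meta_ =
      std::make_unique<AutogradMeta>();
};

struct Tensor {
  std::shared_ptr<TensorImpl> impl_ = std::make_shared<TensorImpl>();

  // Only references TensorImpl: can stay inline in TensorBody.h.
  bool defined() const { return impl_ != nullptr; }

  // References AutogradMeta, which TensorBody.h cannot see: declared
  // here, defined out of line "in Tensor.cpp".
  bool requires_grad() const;
};

// In the real layout this definition sits in Tensor.cpp, after
// variable.h (and thus AutogradMeta's definition) has been included.
inline bool Tensor::requires_grad() const {
  return impl_->autograd_meta_->requires_grad_;
}
```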

I have to add a number of new forward declarations (and, in some cases,
full declarations) to Tensor.h.

One API difference is that all Variable methods are now const, so we no
longer have faux const-correctness (see zdevito/ATen#27 for the back
story).

I would have preferred to eliminate the dynamic distinction first,
but I wanted inline access to AutogradMeta in Tensor, and the
AutogradMeta struct references Variable (furthermore, I cannot
make it reference Tensor, as we return Variable by mutable reference
from grad() to support the "x.grad() = ..." idiom).

Signed-off-by: Edward Z. Yang <[email protected]>

ezyang added a commit that referenced this pull request Oct 18, 2019
ghstack-source-id: 699c091
Pull Request resolved: #28287
return Tensor(self_impl_copy);
}

/// NOTE: `var.variable_data()` in C++ has the same semantics as `tensor.data`

Collaborator:

Does that mean that var.variable_data() is the same as var.detach()?

ezyang (Author):

cc @yf225 this is just preexisting

Contributor:

I think the only difference between var.variable_data() (a.k.a. tensor.data in
Python) and var.detach() (a.k.a. tensor.detach() in Python) is that the former
doesn't share the version counter, while the latter does.
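
A sketch of that distinction (assuming the C++ methods of this era:
detach(), variable_data(), and the _version() accessor this stack
introduced in place of current_version(); the printed counts are the
expected behavior, not verified output):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({2, 2}, torch::requires_grad());

  auto d = x.detach();         // shares x's storage and version counter
  auto v = x.variable_data();  // shares storage, but a fresh version counter

  d.add_(1);                   // bumps the counter shared by x and d

  std::cout << x._version() << "\n";  // 1: sees the in-place bump
  std::cout << v._version() << "\n";  // 0: separate counter, unaware
}
```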

ezyang added a commit that referenced this pull request Oct 18, 2019
ghstack-source-id: f8c9827
Pull Request resolved: #28287
ezyang added the module: bc-breaking (Related to a BC-breaking change) label on Oct 21, 2019
ezyang added a commit that referenced this pull request Oct 21, 2019
ghstack-source-id: 7ea0c4c
Pull Request resolved: #28287
ezyang requested review from colesbury and gchanan on October 21, 2019
ezyang commented Nov 14, 2019

This diff is now rebased past my other changes!

kostmo commented Nov 14, 2019

CircleCI build failures summary

As of commit 658d692:

  • 5/5 failures introduced in this PR
  • 0/5 recognized as flaky

Here are the reasons each build failed:

  • pytorch_linux_xenial_py3_6_gcc5_4_build (Build step):
    /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorMethods.h:6346:1: error:
  • pytorch_linux_xenial_py2_7_9_build (Build step):
    /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorMethods.h:6346:1: error:
  • pytorch_linux_xenial_cuda9_cudnn7_py3_build (Build step):
    /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorMethods.h:6346:1: error:
  • pytorch_libtorch_linux_xenial_cuda9_cudnn7_py3_build (Build step):
    /var/lib/jenkins/cpp-build/caffe2/build/aten/src/ATen/core/TensorMethods.h:6346:1: error:
  • pytorch_linux_xenial_py3_clang5_asan_test (Test step):
    RuntimeError:

This comment was automatically generated by Dr. CI.

ezyang added a commit that referenced this pull request Nov 14, 2019
ghstack-source-id: 0b361ad
Pull Request resolved: #28287
ezyang added a commit that referenced this pull request Nov 15, 2019
…t on Tensor."


Some previous implementations are defined in native_functions.yaml.
In this case, I don't define them explicitly in Tensor; instead
they are placed in VariableTypeManual.cpp.  Doing this would have
deleted their documentation, so that documentation was moved to
native_functions.yaml.

This also replaces `current_version` with just `_version`.
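
A one-line sketch of what the rename means for C++ callers (assuming the
method binding generated from native_functions.yaml):

```cpp
#include <torch/torch.h>
#include <cstdint>

int64_t version_of(const at::Tensor& t) {
  // was: t.current_version();
  return t._version();
}
```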

This is a carved out portion of #28287, rebased past Tensor-Variable
merge.

Signed-off-by: Edward Z. Yang <[email protected]>

Differential Revision: [D18504934](https://our.internmc.facebook.com/intern/diff/D18504934)

facebook-github-bot pushed a commit that referenced this pull request Nov 18, 2019

Summary:
Pull Request resolved: #29667


Signed-off-by: Edward Z. Yang <[email protected]>

Test Plan: Imported from OSS

Differential Revision: D18504934

Pulled By: ezyang

fbshipit-source-id: be7adf45b637daffe2b0b1631eb31d967525fc31
ezyang added a commit to ezyang/pytorch that referenced this pull request Nov 19, 2019

ghstack-source-id: 0d2141e
Pull Request resolved: pytorch#29667
ezyang added a commit to ezyang/pytorch that referenced this pull request Nov 19, 2019
ghstack-source-id: f17eaa6
Pull Request resolved: pytorch#28287
ezyang added a commit that referenced this pull request Nov 20, 2019
ghstack-source-id: 0b43237
Pull Request resolved: #28287
ezyang added a commit that referenced this pull request Nov 20, 2019
ghstack-source-id: 79ab8d7
Pull Request resolved: #28287
ezyang added a commit that referenced this pull request Nov 20, 2019
ghstack-source-id: 67b7804
Pull Request resolved: #28287
facebook-github-bot deleted the gh/ezyang/480/head branch on December 10, 2019
xxtEchjovs44 pushed a commit to xxtEchjovs44/pytorch that referenced this pull request Jan 29, 2020

ghstack-source-id: ebf5f22
Pull Request resolved: pytorch/pytorch#28287
Labels: module: bc-breaking (Related to a BC-breaking change)