Conversation

@yf225 (Contributor) commented May 24, 2019

As part of the Variable/Tensor merge, we want to de-virtualize TensorImpl as much as possible, to achieve performance gain for common Tensor functions such as numel() / sizes() / dim().

This supersedes #11259.
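
For context, a standalone (non-PyTorch) sketch of what de-virtualization buys: a virtual accessor always goes through an indirect vtable call, while a non-virtual accessor that reads a field cached on the base class can be inlined at the call site. The class and field names below are illustrative only.

#include <cstdint>

// Before: every numel() call is an indirect call through the vtable.
struct VirtualImplSketch {
  virtual ~VirtualImplSketch() = default;
  virtual int64_t numel() const = 0;
};

// After: numel() is non-virtual and reads a field that subclasses keep
// up to date, so the compiler can inline it at the call site.
struct DevirtualizedImplSketch {
  int64_t numel() const { return numel_; }
 protected:
  int64_t numel_ = 1;
};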

@yf225 requested a review from gchanan on May 24, 2019 14:42
@pytorchbot added the module: internals label (Related to internal abstractions in c10 and ATen) on May 24, 2019
@gchanan (Contributor) commented May 24, 2019

benchmark?

@gchanan (Contributor) left a comment

there are a bunch of comments in TensorImpl.h about avoiding virtual dispatch for perf that aren't relevant anymore.

TensorImpl* TensorImpl::maybe_zero_dim(bool condition_when_zero_dim) {
  // We only allow this operation on dense tensors
  if (!(has_storage() && layout() == kStrided)) {
    TORCH_CHECK(condition_when_zero_dim == (dim() == 0),
Contributor:

I know you just moved this, but I think this should be an INTERNAL_CHECK; this isn't a public API.
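
For readers less familiar with the distinction being drawn, a minimal standalone sketch (generic names, not the actual PyTorch macros): a public API validates caller input with a user-facing error, while an internal helper asserts an invariant whose violation indicates a bug in the library itself.

#include <cassert>
#include <stdexcept>

// Public API: bad input comes from the caller, so raise a user-facing error.
void public_api(int dim) {
  if (dim < 0) throw std::invalid_argument("dimension must be non-negative");
}

// Internal helper: a negative dim here would mean a bug inside the library,
// so an assert (or an internal-assert macro) is the appropriate tool.
void internal_helper(int dim) {
  assert(dim >= 0 && "internal invariant violated");
}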

Contributor Author:

Fixed.

int64_t size(int64_t d) const override;
int64_t stride(int64_t d) const override;
int64_t dim() const override;
bool has_storage() const override;
Contributor:

why is has_storage still virtual?

Contributor Author:

I am thinking about whether the derived types of TensorImpl should be responsible for reporting whether they have storage. An alternative is to explicitly check whether the current tensor is a sparse or MKLDNN tensor, and if so return false in has_storage().

Contributor Author:

I changed it to explicitly check whether the current tensor is a sparse or MKLDNN tensor, and if so return false in has_storage().

 bool TensorImpl::has_storage() const {
-  return storage_;
+  TORCH_CHECK(type_id_ != UndefinedTensorId(), type_id_, " does not have has_storage");
+  return !is_sparse() && !is_mkldnn() && storage_;
Contributor:

isn't this just strided? Do we ever have strided tensors where "storage_" is false?


TensorImpl* TensorImpl::maybe_zero_dim(bool condition_when_zero_dim) {
  // We only allow this operation on dense tensors
  if (!(has_storage() && layout() == kStrided)) {
Contributor:

same as before, is there a difference between has_storage and the layout being strided?

Contributor:

I guess my main question is: can we treat has_storage (i.e. storage_ != NULL) as equivalent to the layout being kStrided? I don't know if that works with C2 tensors or not, though.
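
A sketch of the simplification being asked about (hypothetical, not necessarily what the PR landed), assuming only strided tensors ever carry a storage_, which is exactly the open question for Caffe2 tensors:

bool TensorImpl::has_storage() const {
  // If storage_ is only ever set for kStrided tensors, the explicit
  // sparse/MKLDNN checks become redundant and the member test suffices.
  return storage_;
}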

@pytorchbot (Collaborator) commented:
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
Stale pull requests will automatically be closed 30 days after being marked Stale

Labels: cla signed, module: internals (Related to internal abstractions in c10 and ATen), Stale