
Conversation

@eellison (Contributor):

When we specialize the tensor type of constants during compilation, it causes all sorts of typing problems.

Fix for #22809

@eellison eellison requested review from driazati and suo July 12, 2019 19:51
@pytorchbot added the "oncall: jit" label (JIT oncall triage queue) Jul 12, 2019
@eellison changed the title from "fix overspecializing constants" to "fix overspecializing constants in compilation" Jul 12, 2019
@eellison eellison requested review from jamesr66a and zdevito July 12, 2019 20:23
  }
  Value* value = m.graph()->insertConstant(constants_[offset], nullptr, loc);

  // specializing tensor type on compilation messes up typing relations
Contributor:
What typing relations does it mess up? Shouldn't we fix where those types are checked instead of throwing away the shape info?

@eellison (Author):

The shape info gets re-specialized later anyway. The problem shows up in cases like the example: a list of complete tensor types does not subtype a list of tensor types, and because a complete tensor type prints the same as a plain tensor type, you get self-contradictory error messages like:

  aten::cat(Tensor[] tensors, int dim=<default>) -> Tensor:
  Expected a value of type 'List[Tensor]' for argument 'tensors' but instead found type 'List[Tensor]'.
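The failure described above can be sketched in a few lines. This is a toy Python model of invariant list subtyping, with hypothetical class names, not the actual TorchScript C++ type system: the specialized element type still subtypes the plain tensor type, but the list types around them do not match, and both lists print identically.

```python
# Toy model of invariant list subtyping (hypothetical names, not the
# real TorchScript C++ types).

class TensorType:
    """Unspecialized tensor type."""
    def __str__(self):
        return "Tensor"
    def is_subtype_of(self, other):
        return isinstance(other, TensorType) and not isinstance(other, CompleteTensorType)

class CompleteTensorType(TensorType):
    """Tensor type specialized with concrete sizes."""
    def __init__(self, sizes):
        self.sizes = sizes
    # __str__ is inherited: specialized types still print as "Tensor",
    # which is why the error message looks self-contradictory.

class ListType:
    """Mutable lists must be invariant in their element type, so
    List[CompleteTensorType] is not a subtype of List[TensorType]."""
    def __init__(self, elem):
        self.elem = elem
    def __str__(self):
        return f"List[{self.elem}]"
    def is_subtype_of(self, other):
        return isinstance(other, ListType) and type(self.elem) is type(other.elem)

expected = ListType(TensorType())              # aten::cat wants List[Tensor]
found = ListType(CompleteTensorType([2, 3]))   # constants got specialized

assert CompleteTensorType([2, 3]).is_subtype_of(TensorType())  # element is fine
assert not found.is_subtype_of(expected)                       # but the list isn't
assert str(expected) == str(found) == "List[Tensor]"           # identical printout
```

Invariance is the right rule for mutable lists (a callee could push a plain tensor into a list the caller believes holds complete tensor types), which is why the fix targets where the specialized type is created rather than the subtyping check.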

Contributor:

This doesn't seem like the right place to fix it. After all, the same bug will exist anywhere insertConstant is used. Fixing insertConstant itself would be better, with shape propagation introducing a shape for a constant tensor.

@eellison (Author), Jul 19, 2019:

I don't think there's another pathway that creates a specialized tensor type during typechecking. The shape-analysis refinement for prim::Constant already exists.
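The approach this comment defends can be sketched as: compilation inserts tensor constants with the generic Tensor type, and the existing shape-analysis pass later refines each prim::Constant output from the value it stores. A rough Python model, with hypothetical names (the real logic lives in the TorchScript C++ compiler and shape-analysis pass):

```python
# Rough model of the fix (hypothetical names; the real implementation
# is in the TorchScript C++ compiler and its shape-analysis pass).

class ConstantNode:
    def __init__(self, value_shape):
        self.kind = "prim::Constant"
        self.value_shape = value_shape  # sizes of the stored tensor value
        # Before the fix, insertion recorded the complete tensor type here.
        # After the fix, compilation only records the generic type:
        self.output_type = "Tensor"

def shape_propagation(node):
    """The pre-existing prim::Constant refinement: recover the
    complete type from the constant's own stored value."""
    if node.kind == "prim::Constant" and node.value_shape is not None:
        node.output_type = f"Tensor(sizes={node.value_shape})"

node = ConstantNode(value_shape=[2, 3])
assert node.output_type == "Tensor"                 # unspecialized at compile time
shape_propagation(node)
assert node.output_type == "Tensor(sizes=[2, 3])"   # re-specialized later
```

This keeps typechecking free of over-specialized constants while still letting optimization passes see concrete shapes after shape propagation runs.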

@facebook-github-bot (Contributor):

@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):

@eellison merged this pull request in f2f3e8a.


Labels

Merged · oncall: jit

6 participants