BFloat16: add explicit dtype support for to_mkldnn and to_dense #37215
Conversation
albanD left a comment
How general is this supposed to be? If there are restrictions on the dtype, I think a couple of checks are missing.
Also, for sparse Tensors, if the output dtype arg is not supported, an error should be raised when the user passes it.
// NB: Dropped the resizeNd variants

- Tensor sparse_to_dense(const SparseTensor& self) {
+ Tensor sparse_to_dense(const SparseTensor& self, c10::optional<ScalarType> dtype) {
The new flag is completely ignored here. Is that expected?

This is just for dispatch, which should have the same number of inputs.

In this case, please error out if dtype is provided, as we prefer not to have situations where the user passes an argument that is ignored.
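For illustration only, a minimal sketch (not the PR's actual code) of how such a guard could look in the sparse path, assuming the new optional `dtype` parameter and the standard `TORCH_CHECK` macro; the error message wording is an assumption:

```cpp
Tensor sparse_to_dense(const SparseTensor& self, c10::optional<ScalarType> dtype) {
  // Reject an explicitly passed dtype rather than silently ignoring it.
  TORCH_CHECK(
      !dtype.has_value(),
      "dtype argument is not supported in sparse_to_dense, got ",
      *dtype);
  // ... existing conversion from sparse to strided layout ...
}
```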
-   AT_ASSERTM(cpu_tensor.dim() <= 5,
-     "Can't convert cpu tensor with the number of dimensions > 5");
+ Tensor dense_to_mkldnn(const Tensor& cpu_tensor, c10::optional<ScalarType> dtype) {
+   TORCH_INTERNAL_ASSERT(cpu_tensor.device().type() == DeviceType::CPU,
Why are these internal asserts? Are these args not controlled by the user? If they are, I think you'd prefer a TORCH_CHECK here.
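As a sketch of the distinction being drawn (following the usual PyTorch convention, not the PR's final code): `TORCH_CHECK` produces a user-facing, catchable error for conditions the caller can violate, while `TORCH_INTERNAL_ASSERT` is reserved for invariants that should never fail. The message strings below are assumptions:

```cpp
Tensor dense_to_mkldnn(const Tensor& cpu_tensor, c10::optional<ScalarType> dtype) {
  // User-controlled preconditions: report them as catchable errors.
  TORCH_CHECK(cpu_tensor.device().type() == DeviceType::CPU,
              "dense_to_mkldnn expects a CPU tensor");
  TORCH_CHECK(cpu_tensor.dim() <= 5,
              "Can't convert cpu tensor with the number of dimensions > 5");
  // ... conversion into an opaque MKL-DNN tensor ...
}
```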
-                     ideep::tensor::data_type::f32,
-                     (cpu_tensor_cont.template data_ptr<float>()));
+   if (cpu_tensor.scalar_type() == ScalarType::Float) {
+     dtensor.feed_from(dtensor.get_dims(),
Will this function work for any dtype provided?

Yes, the default dtype is float; other dtypes, such as int8, can also be supported.
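A rough sketch of what that dispatch might look like, inferred from the diff lines above; the `bf16` branch and the exact `feed_from` arguments are assumptions rather than a quote of the merged code:

```cpp
// Copy the CPU buffer into the ideep tensor, picking the source data
// type from the input tensor's scalar type.
if (cpu_tensor.scalar_type() == ScalarType::Float) {
  dtensor.feed_from(dtensor.get_dims(),
                    ideep::tensor::data_type::f32,
                    cpu_tensor_cont.template data_ptr<float>());
} else if (cpu_tensor.scalar_type() == ScalarType::BFloat16) {
  dtensor.feed_from(dtensor.get_dims(),
                    ideep::tensor::data_type::bf16,
                    cpu_tensor_cont.template data_ptr<c10::BFloat16>());
} else {
  TORCH_CHECK(false, "dense_to_mkldnn expects float or bfloat16 tensor input");
}
```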
    if (stensor.is_empty()) return cpu_tensor;
-   auto pub_tensor = stensor.to_public(cpu_tensor.template data_ptr<float>());
+   auto pub_tensor =
+       cpu_tensor.scalar_type() == ScalarType::Float
Here the cpu_tensor's type is dtype, so it can be something other than float/bfloat, right?

Yes.
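Correspondingly, the reverse conversion can branch on the scalar type when publishing the MKL-DNN buffer back into a dense tensor. A sketch under the assumption that `to_public` accepts a destination data type alongside the output pointer (inferred from this diff, not verified against the ideep headers):

```cpp
// Publish the opaque MKL-DNN storage into the freshly allocated dense
// CPU tensor, as either f32 or bf16 depending on the output dtype.
auto pub_tensor =
    cpu_tensor.scalar_type() == ScalarType::Float
        ? stensor.to_public(cpu_tensor.template data_ptr<float>(),
                            ideep::tensor::data_type::f32)
        : stensor.to_public(cpu_tensor.template data_ptr<c10::BFloat16>(),
                            ideep::tensor::data_type::bf16);
```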
VitalyFedyunin left a comment
Several error messaging fixes required, everything else looks good.
// NB: Dropped the resizeNd variants

- Tensor sparse_to_dense(const SparseTensor& self) {
+ Tensor sparse_to_dense(const SparseTensor& self, c10::optional<ScalarType> dtype) {
In this case, please error out if dtype is provided, as we prefer not to have situations where the user passes an argument that is ignored.
Please rebase the entire stack to make it mergeable.
Stack from ghstack:
Differential Revision: D22440964