
Conversation

Collaborator

@XiaobingSuper XiaobingSuper commented Apr 24, 2020

Stack from ghstack:

Differential Revision: D22440964

@dr-ci

dr-ci bot commented Apr 24, 2020

💊 CI failures summary and remediations

As of commit 1fdde52 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@XiaobingSuper
Collaborator Author

@jgong5, @hongzhen1

@XiaobingSuper XiaobingSuper requested a review from albanD April 26, 2020 02:19
Collaborator

@albanD albanD left a comment


How general is this supposed to be? If there are restrictions on the dtype, a couple checks are missing I think.

Also, for sparse Tensor: if the output dtype arg is not supported, an error should be raised when the user passes it.

// NB: Dropped the resizeNd variants

Tensor sparse_to_dense(const SparseTensor& self) {
Tensor sparse_to_dense(const SparseTensor& self, c10::optional<ScalarType> dtype) {
Collaborator

The new flag is completely ignored here. Is that expected?

Collaborator Author

This is just for dispatch, which requires all overloads to take the same number of inputs.

Contributor

In this case please error out if dtype is provided; we prefer to avoid situations where a user passes an argument that is silently ignored.

AT_ASSERTM(cpu_tensor.dim() <= 5,
"Can't convert cpu tensor with the number of dimensions > 5");
Tensor dense_to_mkldnn(const Tensor& cpu_tensor, c10::optional<ScalarType> dtype) {
TORCH_INTERNAL_ASSERT(cpu_tensor.device().type() == DeviceType::CPU,
Collaborator

Why are these internal asserts? Aren't these args controlled by the user? If so, I think you want a TORCH_CHECK here.

ideep::tensor::data_type::f32,
(cpu_tensor_cont.template data_ptr<float>()));
if (cpu_tensor.scalar_type() == ScalarType::Float) {
dtensor.feed_from(dtensor.get_dims(),
Collaborator

Will this function work for any dtype provided?

Collaborator Author

Yes, the default dtype is float; it can also support other dtypes, such as int8.

if (stensor.is_empty()) return cpu_tensor;
auto pub_tensor = stensor.to_public(cpu_tensor.template data_ptr<float>());
auto pub_tensor =
cpu_tensor.scalar_type() == ScalarType::Float
Collaborator

Here the cpu_tensor's type is dtype, so it can be things other than float/bfloat16, right?

Collaborator Author

Yes.

@ngimel ngimel added the triaged label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) May 5, 2020
qiuxin2012 pushed a commit to qiuxin2012/pytorch that referenced this pull request Jul 27, 2020
Contributor

@VitalyFedyunin VitalyFedyunin left a comment

Several error messaging fixes required, everything else looks good.

// NB: Dropped the resizeNd variants

Tensor sparse_to_dense(const SparseTensor& self) {
Tensor sparse_to_dense(const SparseTensor& self, c10::optional<ScalarType> dtype) {
Contributor

In this case please error out if dtype is provided; we prefer to avoid situations where a user passes an argument that is silently ignored.

@VitalyFedyunin
Contributor

Please rebase the entire stack to make it mergeable.

@facebook-github-bot
Contributor

Hi @XiaobingSuper!

Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but we do not have a signature on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!


Labels

cla signed · module: bfloat16 · module: mkldnn (Related to Intel IDEEP or oneDNN (a.k.a. mkldnn) integration) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

7 participants