
Conversation

@BowenBao
Collaborator

@BowenBao BowenBao commented May 13, 2019

This is work in progress due to its dependency on multiple pending PRs.

This PR should partially resolve #17531. Ideally, however, we shouldn't need to insert Cast (and Reshape) nodes to help convert the loop condition.

  • Added a Cast node for condition values before entering the Loop node. The ONNX spec only accepts the Bool type, while in PyTorch, if the condition value is the output of another node, it could potentially have any integral type.
  • Tidied up the input types and shapes of the exported ONNX Loop subgraph. According to the ONNX spec, input "M" is exported as a 0-d scalar tensor of type int64, and input "cond" is exported as an incomplete tensor of type Bool without shape information. This is because, throughout the iterations, the rank of the condition value is dynamic, either 0-d or 1-d, as long as it holds a single value.
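
As a rough illustration of the input contract described above, here is a pure-Python sketch. The helper names and the string/tuple representation of tensor metadata are hypothetical stand-ins, not the actual ONNX proto API:

```python
# Sketch of the exported Loop input contract, per the PR description.
# dtype/shape are plain strings and tuples standing in for tensor metadata.

def is_valid_trip_count(dtype: str, shape: tuple) -> bool:
    """Input "M": exported as a 0-d scalar tensor of type int64."""
    return dtype == "int64" and shape == ()

def is_valid_loop_cond(dtype: str, shape: tuple) -> bool:
    """Input "cond": type Bool; rank is dynamic (0-d or 1-d),
    as long as the tensor holds a single value."""
    return dtype == "bool" and shape in ((), (1,))

print(is_valid_trip_count("int64", ()))   # True
print(is_valid_loop_cond("bool", (1,)))   # True: 1-d with a single value
print(is_valid_loop_cond("uint8", ()))    # False: needs a Cast to Bool first
```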

@pytorchbot pytorchbot added oncall: jit Add this issue/PR to JIT oncall triage queue module: onnx Related to torch.onnx labels May 13, 2019
@jeffreyksmithjr jeffreyksmithjr added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label May 13, 2019
@zdevito zdevito removed their request for review May 14, 2019 17:01
Member

@houseroad houseroad left a comment


I think some folks are adding boolean tensor support to PyTorch now. Shall we wait a bit for that effort?

@BowenBao
Collaborator Author

> I think some folks are adding boolean tensor support to PyTorch now. Shall we wait a bit for that effort?

Great to know there will be improved native boolean tensor support in PyTorch. Is there a timeline for that change? I have some other local changes to better support exporting boolean scalars; I guess those can wait.
I believe most of the changes in this PR are still required to produce a well-formed Loop ONNX model. I'll add more info to the description to make it clearer.

@houseroad
Member

It should be fully ready in PyTorch 1.2. Let's move this change forward then.

@BowenBao BowenBao changed the title [ONNX][WIP] Improve ONNX Loop export [ONNX] Improve ONNX Loop export May 23, 2019
@pytorchbot pytorchbot added the module: tests Issues related to tests (not the torch.testing module) label May 23, 2019
@BowenBao BowenBao marked this pull request as ready for review May 23, 2019 17:30
@BowenBao BowenBao force-pushed the onnx_loop branch 2 times, most recently from 42b14fb to b851f45 Compare May 28, 2019 16:57
Contributor

@facebook-github-bot facebook-github-bot left a comment


@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Member

@houseroad houseroad left a comment


LGTM, two more nits

Member


Nit: Can we check cond_val to determine whether it's necessary to add the cast node?

Collaborator Author


Added. Note this won't be of much use until native bool type support improves; currently, most cases pass in either byte or long, which still need casting at the ONNX level.
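
The check discussed here might look roughly like the following. This is a hedged, plain-Python sketch; the dtype strings are illustrative stand-ins for the JIT scalar types, not the real torch.onnx code:

```python
# Sketch: skip the Cast only when the loop condition is already Bool.
# With limited native bool support, conditions usually arrive as byte
# (uint8) or long (int64), so the Cast is still needed most of the time.

ONNX_LOOP_COND_TYPE = "bool"  # the only condition type ONNX Loop accepts

def cond_needs_cast(cond_dtype: str) -> bool:
    """Return True when a Cast node must be inserted before the Loop."""
    return cond_dtype != ONNX_LOOP_COND_TYPE

print(cond_needs_cast("bool"))   # False: already Bool, skip the Cast
print(cond_needs_cast("uint8"))  # True: byte still needs casting
print(cond_needs_cast("int64"))  # True: long still needs casting
```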

Contributor

@facebook-github-bot facebook-github-bot left a comment


@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Member

@houseroad houseroad left a comment


Looks good.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@houseroad
Member

I am resolving the conflicts on my side, so no action is required.

@facebook-github-bot
Contributor

@houseroad merged this pull request in 9a41f44.

Labels

  • Merged
  • module: onnx (Related to torch.onnx)
  • module: tests (Issues related to tests (not the torch.testing module))
  • oncall: jit (Add this issue/PR to JIT oncall triage queue)
  • open source
  • triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

JIT graph to ONNX Loop cond variable should be tensor(bool)

7 participants