
Conversation

@tugsbayasgalan (Contributor) commented Sep 12, 2024

pytorch-bot bot commented Sep 12, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135918

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit dadc6db with merge base 09519eb:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@tugsbayasgalan tugsbayasgalan added the topic: docs topic category label Sep 12, 2024
@tugsbayasgalan (Contributor Author)

@tugsbayasgalan has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

tugsbayasgalan added a commit that referenced this pull request Sep 12, 2024
ghstack-source-id: d67cdc3
Pull Request resolved: #135918
@avikchaudhuri (Contributor) left a comment


Looks good, suggested edits for grammar / style.


.. _Training Export:

Training Export
Not a good title. Maybe something like "Export for Training and Inference"?


In this API, we produce the most generic IR that contains all ATen operators
(including both functional and non-functional) which can be used to train in
eager PyTorch Autograd. This API is intended for PT2 quantization training use cases
I'm not sure why you need to mention quantization training. E.g., you'd want this for normal eager training too.

In this API, we produce the most generic IR that contains all ATen operators
(including both functional and non-functional) which can be used to train in
eager PyTorch Autograd. This API is intended for PT2 quantization training use cases
and will soon be the default IR of torch.export.export in the near future. To read further about
soon vs. near future, pick one

)
Range constraints: {}
Here you can see that, we kept `conv2d` op in the IR while decomposing the rest. Now the IR is an functional IR
Edited:

Here you can see that we kept `conv2d` op in the IR while decomposing the rest. Now the IR is a functional IR
containing core aten operators except for `conv2d`.

You can do even more customization by directly registering your chosen decomposition behaviors.

as :func:`export` except for the operators in the graph. You can see that we captured `batch_norm` in the most general
form. This op is non-functional and will be lowered to different ops when running under inference mode.

You can also go from this IR to an inference IR via :func:`run_decompositions` with arbitrary customizations
Edited:

From the above output, you can see that :func:`export_for_training` produces pretty much the same ExportedProgram
as :func:`export` except for the operators in the graph. You can see that we captured `batch_norm` in the most general
form. This op is non-functional and will be lowered to different ops when running inference.

You can also go from this IR to an inference IR via :func:`run_decompositions` with arbitrary customizations.

the motivation behind this change, please refer to
https://dev-discuss.pytorch.org/t/why-pytorch-does-not-need-a-new-standardized-operator-set/2206

With this API, and :func:`run_decompositions()`, you should be able to get any inference IR with
Edited:

When this API is combined with :func:`run_decompositions()`, you should be able to get inference IR with
any desired decomposition behavior.

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Sep 13, 2024
tugsbayasgalan added a commit that referenced this pull request Sep 13, 2024
ghstack-source-id: fe2a363
Pull Request resolved: #135918
tugsbayasgalan added a commit that referenced this pull request Sep 14, 2024
ghstack-source-id: 69430a3
Pull Request resolved: #135918
tugsbayasgalan added a commit that referenced this pull request Sep 15, 2024
ghstack-source-id: e16f6c9
Pull Request resolved: #135918
@facebook-github-bot (Contributor)
@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

@pytorchmergebot (Collaborator)
Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.


Labels: ciflow/trunk (Trigger trunk jobs on your pull request) · Merged · release notes: export · topic: docs (topic category)
