[ONNX] Create an optimize method in ONNXProgram
#137667
Conversation
🔗 Helpful Links: see artifacts and rendered test results at hud.pytorch.org/pr/137667. Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 9bbfae5 with merge base 7408742. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Move optimization from the export call to the `optimize()` method in ONNXProgram. Users can call `optimize()` before calling `save()` to save the model. This change lets users control optimization behavior: optimization is no longer always applied inside `torch.onnx.export`, so the core export logic can be more robust. ghstack-source-id: 17ddf6a. Pull Request resolved: #137667
`optimize` method in ONNXProgram
Although this is good for developers to maintain and debug, from a user perspective it's a change of usage: they now have to call an extra step to get the same performance as TorchScript. I think optimizing by default might be a better choice.
The issue with this is that the optimizer may fail sometimes, and it will take a much larger effort to bring it to the point where it runs successfully on all possible dtypes, most models, etc. So having an option would be good imo. We could add an
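The failure mode discussed above suggests a "best effort" pattern: run the optimizer, but fall back to the unoptimized model if it raises. The sketch below is purely illustrative (the helper name and structure are assumptions, not the actual torch.onnx internals):

```python
# Hypothetical sketch of "optimize, but fall back gracefully".
# Names here are illustrative; this is not the real torch.onnx code.
import logging

logger = logging.getLogger(__name__)


def optimize_best_effort(onnx_program):
    """Try to optimize the exported program in place; keep the
    unoptimized model if the optimizer fails on an unsupported
    dtype or model. Always returns the program object."""
    try:
        onnx_program.optimize()
    except Exception as exc:  # optimizer may fail on some models
        logger.warning(
            "ONNX optimization failed; keeping unoptimized model: %s", exc
        )
    return onnx_program
```

With a wrapper like this, export could enable optimization by default without becoming less robust, since an optimizer failure would degrade to an unoptimized (but still valid) model rather than a failed export.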
Move optimization from the export call to the `optimize()` method in ONNXProgram.
Users can call `optimize()` before calling `save()` to save the model. This change lets users control optimization behavior: optimization is no longer always applied inside `torch.onnx.export`, so the core export logic can be more robust.
Example
```python
onnx_program = torch.onnx.export(..., dynamo=True)
onnx_program.optimize()
onnx_program.save("model.onnx")
```
Updated to provide the `optimize=False` API in `torch.onnx.export` and make it call this method. When the optimizer becomes stable, we can default it to True. Or we can educate users to leverage the ONNXProgram object for the most flexibility. The `torch.onnx.export(..., f)` call (with a path) should just be transitional.
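The flag-forwarding design described here can be sketched with simplified stand-ins (these class and function definitions are assumptions for illustration, not the real `torch.onnx.export` signature): the `optimize` keyword is just sugar for calling `ONNXProgram.optimize()` on the result before returning it.

```python
# Illustrative stand-ins for the export/optimize relationship.
# Not the real torch.onnx API; names and defaults are assumptions.

class ONNXProgram:
    """Minimal stand-in for the exported-program object."""

    def __init__(self):
        self.optimized = False

    def optimize(self):
        # In the real API this runs ONNX optimization passes in place.
        self.optimized = True


def export(model, optimize=False):
    """Stand-in for torch.onnx.export: when optimize=True, it simply
    calls ONNXProgram.optimize() on the result before returning it."""
    program = ONNXProgram()
    if optimize:
        program.optimize()
    return program
```

This keeps a single code path for optimization, so flipping the default from False to True later is a one-line change that does not alter how `optimize()` itself behaves.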
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot merge
Merge failed. Reason: command failure (details for the Dev Infra team; raised by a workflow job).
The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):

`optimize` method in ONNXProgram #137667

Move optimization from the export call to the `optimize()` method in ONNXProgram. Users can call `optimize()` before calling `save()` to save the model. Right now, if users set `optimize=True` in `torch.onnx.export`, it will have the same effect as calling `optimize()`, but in the future we can evolve the method to be more flexible (e.g. target aware etc.)

Example