Allow removing modules #692
Closed
Conversation
Contributor
After debating for probably more than a month, going back and forth a hundred times, @colesbury vetoed this. So bringing closure to the PR.
bddppq pushed a commit to bddppq/pytorch that referenced this pull request on Apr 17, 2018:
…9c90c8 Previous import was a4dcc47791eb127652f5aaddd51d8896d446a067

Included changes:
- **[985af3f](onnx/onnx@985af3f)**: Update PythonAPIOverview.md (pytorch#738) <Dmytro Dzhulgakov>
- **[b69be33](onnx/onnx@b69be33)**: Add backend test for upsample (pytorch#729) <Sebastian Meßmer>
- **[0d9496e](onnx/onnx@0d9496e)**: Input test data of concat op should be float (pytorch#711) <Changming Sun>
- **[20bcb8b](onnx/onnx@20bcb8b)**: Fix the spec for batchnorm and instancenorm (pytorch#733) <Lu Fang>
- **[c9f825f](onnx/onnx@c9f825f)**: Refine a little bit about op spec. (pytorch#666) <Ke Zhang>
- **[a484eb2](onnx/onnx@a484eb2)**: Fix an error in Conv doc (pytorch#731) <Lu Fang>
- **[7410cc4](onnx/onnx@7410cc4)**: Fix incorrect package output paths (pytorch#730) <bddppq>
- **[be546e2](onnx/onnx@be546e2)**: Improve optimizer's API and docs (pytorch#713) <Lu Fang>
- **[c61506f](onnx/onnx@c61506f)**: Fix the shape inference python API (pytorch#716) <Lu Fang>
- **[e9d4134](onnx/onnx@e9d4134)**: Fix cmake on windows when not building python extension (pytorch#728) <bddppq>
- **[72187aa](onnx/onnx@72187aa)**: Add value_info support in make_graph (pytorch#726) <Lu Fang>
- **[67b7d89](onnx/onnx@67b7d89)**: Fix gen_proto in cmake (pytorch#719) <bddppq>
- **[fcb4ae3](onnx/onnx@fcb4ae3)**: docs rewording: Important Python Functions -> Python API Overview (pytorch#721) <anderspapitto>
- **[24275d6](onnx/onnx@24275d6)**: Ignore .eggs directory when doing lint (pytorch#722) <bddppq>
- **[54be8fa](onnx/onnx@54be8fa)**: Use cmake3 if it's available (pytorch#718) <bddppq>
- **[b8c4238](onnx/onnx@b8c4238)**: Add python function docs (pytorch#714) <Lu Fang>
- **[e177493](onnx/onnx@e177493)**: Remove unused cmake utils (pytorch#712) <bddppq>
- **[72d6ad6](onnx/onnx@72d6ad6)**: Remove pycmd from CMake (pytorch#710) <bddppq>
- **[93f0d40](onnx/onnx@93f0d40)**: Fix windows local build (pytorch#709) <Raymond Yang>
- **[6734224](onnx/onnx@6734224)**: CMake fixes and setup.py cleanup (pytorch#706) <bddppq>
- **[7f6a4fd](onnx/onnx@7f6a4fd)**: Add docs to explain important functions in ONNX Infra (pytorch#682) <Lu Fang>
- **[f0f6b3d](onnx/onnx@f0f6b3d)**: fix hardmax test cases make output dtype same as input (pytorch#705) <Wenhao Hu>
- **[c970f0c](onnx/onnx@c970f0c)**: Fix the Dummy backend (pytorch#701) <Lu Fang>
- **[2af45df](onnx/onnx@2af45df)**: setup.py uses cmake build system (pytorch#606) <anderspapitto>
- **[dfcaade](onnx/onnx@dfcaade)**: clean up unused variable left by removing consumed_input (pytorch#697) <bddppq>
- **[accfc74](onnx/onnx@accfc74)**: Remove incorrect backend test (pytorch#700) <Lu Fang>
- **[e558732](onnx/onnx@e558732)**: add max inclusive version to defs.get_schema function (pytorch#695) <Wenhao Hu>
- **[16f02eb](onnx/onnx@16f02eb)**: add API to add domain to min/max version for extension. (pytorch#694) <Ke Zhang>
- **[3e560dd](onnx/onnx@3e560dd)**: Fix doc for initializer (pytorch#690) <bddppq>
- **[6cc4f53](onnx/onnx@6cc4f53)**: Add model save function (pytorch#692) <Lu Fang>
- **[21eaf9b](onnx/onnx@21eaf9b)**: Changing the string discussing versions in operator specifications. (pytorch#691) <Niklas Gustafsson>
- **[3b0cdf4](onnx/onnx@3b0cdf4)**: Minor code quality improvements in optimizer/ (pytorch#612) <Sebastian Meßmer>
- **[641f126](onnx/onnx@641f126)**: Fix Gemm doc wording (pytorch#689) <bddppq>
- **[4a0ec75](onnx/onnx@4a0ec75)**: Clarifies installation error message when external protobuf dependencies are missing (pytorch#684) <Daniel J. H>
- **[960a2c3](onnx/onnx@960a2c3)**: Check outputs dtype in backend tests (pytorch#567) <bddppq>
- **[1d7dee4](onnx/onnx@1d7dee4)**: Fix Average pool test cases converted from PyTorch (pytorch#677) <Lu Fang>
- **[36d7fff](onnx/onnx@36d7fff)**: Fix Attribute default value pybind11 binding (pytorch#671) <bddppq>
- **[0536866](onnx/onnx@0536866)**: git ignore .pytest_cache (pytorch#674) <bddppq>
- **[afc84ac](onnx/onnx@afc84ac)**: Update README.md (pytorch#672) <Dmytro Dzhulgakov>
- **[9d2b530](onnx/onnx@9d2b530)**: Revert "[Typing 1/3] Setup mypy type checker (pytorch#607)" (pytorch#667) <bddppq>
- **[086727e](onnx/onnx@086727e)**: [Typing 1/3] Setup mypy type checker (pytorch#607) <Sebastian Meßmer>
- **[5716e20](onnx/onnx@5716e20)**: Convert all Node tests to Model tests (pytorch#651) <bddppq>
- **[6fe932a](onnx/onnx@6fe932a)**: Replace unittest.skip with custom exception (pytorch#659) <Dmytro Dzhulgakov>
- **[ecac1c1](onnx/onnx@ecac1c1)**: Merge Rel 1.1.0 branch into master (pytorch#657) <Anirudh>
- **[5cb999d](onnx/onnx@5cb999d)**: Minor cleanups to shape inference (pytorch#653) <anderspapitto>
- **[f4acf28](onnx/onnx@f4acf28)**: Remove allowconsumed enforceconsumed from op schema. (pytorch#617) <Ke Zhang>
- **[a8e4648](onnx/onnx@a8e4648)**: Adjust link flags when built in Windows Debug mode (pytorch#647) <Yinghai Lu>
- **[7c009fe](onnx/onnx@7c009fe)**: Fix lint error in optimizer test (pytorch#656) <bddppq>
- **[063d12f](onnx/onnx@063d12f)**: Fix optimizer split pass for models with constant output (pytorch#652) <bddppq>
mcarilli pushed a commit to mcarilli/pytorch that referenced this pull request on Mar 18, 2021:
This PR introduces two specialized operations: `aten::autocast_to_fp16` and `aten::autocast_to_fp32`. The new operations are required for correctness (see https://dev-discuss.pytorch.org/t/jit-scripting-autocast/139). A bonus is that the IR is cleaner and easier to read (no need to create a bunch of dummy constants to fill in all the `aten::to` parameters).

Before Autocast:

```
graph(%a.1 : Tensor, %b.1 : Tensor, %c : Tensor, %d.1 : Tensor):
  %4 : bool = prim::Constant[value=1]()
  %5 : __torch__.torch.cuda.amp.autocast_mode.autocast = prim::CreateObject()
   = prim::SetAttr[name="_enabled"](%5, %4)
  %7 : __torch__.torch.cuda.amp.autocast_mode.autocast = prim::Enter(%5)
  %e.1 : Tensor = aten::mm(%a.1, %b.1) # test1.py:16:12
  %f.1 : Tensor = aten::mm(%d.1, %e.1) # test1.py:17:12
  %10 : Tensor = prim::Exit(%5)
  %11 : (Tensor, Tensor) = prim::TupleConstruct(%e.1, %f.1)
  return (%11)
```

After Autocast:

```
graph(%a.1 : Tensor, %b.1 : Tensor, %c : Tensor, %d.1 : Tensor):
  %4 : bool = prim::Constant[value=1]()
  %5 : __torch__.torch.cuda.amp.autocast_mode.autocast = prim::CreateObject()
   = prim::SetAttr[name="_enabled"](%5, %4)
  %7 : __torch__.torch.cuda.amp.autocast_mode.autocast = prim::Enter(%5)
  %13 : Tensor = aten::autocast_to_fp16(%b.1)
  %14 : Tensor = aten::autocast_to_fp16(%a.1)
  %e.1 : Tensor = aten::mm(%14, %13) # test1.py:16:12
  %15 : Tensor = aten::autocast_to_fp16(%e.1)
  %16 : Tensor = aten::autocast_to_fp16(%d.1)
  %f.1 : Tensor = aten::mm(%16, %15) # test1.py:17:12
  %10 : Tensor = prim::Exit(%5)
  %11 : (Tensor, Tensor) = prim::TupleConstruct(%e.1, %f.1)
  return (%11)
```
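The shape of the rewrite above can be illustrated with a toy pass, shown below. This is not PyTorch's actual implementation; the tuple-based "IR", the `insert_autocast_casts` helper, and the `%cast` value names are hypothetical, used only to show how cast nodes get inserted in front of the inputs of fp16-eligible ops such as `mm`.

```python
# Toy sketch (hypothetical IR): each node is (op, input_names, output_name).
# For every fp16-eligible op, insert an "autocast_to_fp16" cast node per
# input and rewire the op to consume the cast outputs, mirroring how the
# JIT pass wraps the inputs of aten::mm in the "After Autocast" graph.
def insert_autocast_casts(graph, fp16_ops=("mm",)):
    out = []
    counter = 0
    for op, inputs, output in graph:
        if op in fp16_ops:
            cast_inputs = []
            for inp in inputs:
                cast_out = f"%cast{counter}"
                counter += 1
                out.append(("autocast_to_fp16", [inp], cast_out))
                cast_inputs.append(cast_out)
            out.append((op, cast_inputs, output))
        else:
            out.append((op, inputs, output))
    return out

# The two mm nodes from the example graph: %e = mm(%a, %b); %f = mm(%d, %e)
graph = [("mm", ["%a", "%b"], "%e"), ("mm", ["%d", "%e"], "%f")]
new_graph = insert_autocast_casts(graph)
# Each mm now has two cast nodes in front of it, six nodes in total.
```

Note that, as in the real pass, the cast is inserted per use site rather than per value, which is why `%e.1` is cast again before the second `mm` even though it was produced under autocast.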
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this pull request on Sep 20, 2021:
We're moving towards enabling the CPU compiler backend, which depends on LLVM. Locally, I measured that this adds 11 MB to the wheel (27 MB for the uncompressed libraries). I also checked that it coexists nicely with numba/llvmlite (e.g., no symbol conflicts when both are loaded).
dstaay-fb added a commit to dstaay-fb/pytorch that referenced this pull request on Oct 6, 2022:
Summary:
X-link: meta-pytorch/torchrec#692
Pull Request resolved: pytorch#85781
X-link: meta-pytorch/tnt#207
X-link: meta-pytorch/torchsnapshot#76

The flow logic around torch.distributed imports results in a large number of pyre errors. In the spirit of failing fast, it is best to raise ImportError rather than silently failing to import the bindings and letting the user find out only when actually trying to run library methods/functions.

**NOTE**: this breaks backward compatibility for users who previously did not actually have torch.distributed but were able to import the library without error. In reality, though, they couldn't actually use the library, so the first call into it would have failed. The only way user code will actually be affected is if it had a dead import (i.e., it imported torch.distributed but never actually called into the library).

Also removed the tens to hundreds of unused pyre ignores that are no longer required.

Test Plan: existing unit tests
Reviewed By: mrshenli
Differential Revision: D39842273
fbshipit-source-id: 8b204c87d194fe1a44158ca21561fb78ee88e6ca
facebook-github-bot pushed a commit that referenced this pull request on Oct 7, 2022:
Summary:
X-link: meta-pytorch/torchrec#692
Pull Request resolved: #85781
X-link: meta-pytorch/tnt#207
X-link: meta-pytorch/torchsnapshot#76

The flow logic around torch.distributed imports results in a large number of pyre errors. In the spirit of failing fast, it is best to raise ImportError rather than silently failing to import the bindings and letting the user find out only when actually trying to run library methods/functions.

**NOTE**: this breaks backward compatibility for users who previously did not actually have torch.distributed but were able to import the library without error. In reality, though, they couldn't actually use the library, so the first call into it would have failed. The only way user code will actually be affected is if it had a dead import (i.e., it imported torch.distributed but never actually called into the library).

Also removed the tens to hundreds of unused pyre ignores that are no longer required.

Test Plan: existing unit tests
Reviewed By: mrshenli
Differential Revision: D39842273
fbshipit-source-id: f7bf6a237912bb7bfd22736e7404d58eacad7341
Addresses #358