Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault #11205
Conversation
    bool is_sparse() const override {
      return backend() == Backend::SparseCPU || backend() == Backend::SparseCUDA;
    }
    bool is_distributed() const override {
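This hunk shows why the move works: the predicate depends only on backend(), not on any dtype- or layout-specific state, so a single TypeDefault implementation can serve every generated subclass. A sketch of the pattern follows; is_sparse() is taken from the diff above, while the is_cuda() body is an assumed analogue, not quoted from this PR:

    // Inside TypeDefault (sketch): predicates phrased purely in terms
    // of backend(), so every dtype-specific subclass inherits them.
    // is_sparse() matches the hunk above; is_cuda() is an assumed analogue.
    bool is_cuda() const override {
      return backend() == Backend::CUDA || backend() == Backend::SparseCUDA;
    }
    bool is_sparse() const override {
      return backend() == Backend::SparseCPU || backend() == Backend::SparseCUDA;
    }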
@pytorchbot retest this please
5 similar comments
ssnl left a comment:
LGTM
Summary: Pull Request resolved: pytorch/pytorch#11205

Our short-term plan for supporting out-of-tree complex development requires an external library to add a custom subclass of Type without access to the code generation facilities in ATen. This commit reorganizes Type so as to minimize the amount of boilerplate you have to write when making a subclass of Type. In particular, it:

- Creates a new CPUTypeDefault/CUDATypeDefault class, which you are intended to inherit from and which provides default CPU/CUDA implementations that are layout/dtype agnostic.
- Adds new getCPUAllocator() and getCUDAAllocator() functions, as a more public API for getting your hands on an Allocator.
- Adds allocator() and getDeviceFromPtr(), abstracting the device-specific parts of the storage() methods; these methods are now implemented in the base TypeDefault.
- Deletes the static typeString() method, which is now dead.
- Moves is_cuda/is_sparse/is_distributed to TypeDefault.

Reviewed By: SsnL
Differential Revision: D9631619
fbshipit-source-id: 40b600d99691230e36e03eb56434c351cbc2aa3a
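To make the division of labor concrete, here is a rough sketch of what the CPU half of this design could look like. Only the names CPUTypeDefault, TypeDefault, getCPUAllocator(), allocator(), and getDeviceFromPtr() come from the summary above; the constructor handling and method bodies below are assumptions about their shape, not the actual ATen source:

    // Sketch only, not the real ATen declaration: the new CPU base
    // class centralizes the device-specific hooks so dtype-specific
    // subclasses no longer have to repeat them.
    struct CPUTypeDefault : public TypeDefault {
      using TypeDefault::TypeDefault;  // assumed constructor passthrough

      // allocator() defers to the new public getter.
      Allocator* allocator() const override {
        return getCPUAllocator();
      }

      // Every CPU pointer belongs to the single CPU device.
      Device getDeviceFromPtr(void* /*data*/) const override {
        return Device(DeviceType::CPU);
      }
    };

With these two hooks factored out, TypeDefault can implement the storage() methods generically: it asks the subclass for the right Allocator and derives the device from the data pointer.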
resolve conflict in data parallel model

* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault
Our short term plan for supporting out of tree complex development requires an
external library to add a custom subclass of Type without access to the
code generation facilities in ATen. This commit reorganizes Type so
as to minimize the amount of boilerplate you have to write when making
a subclass of Type.
In particular, it:

- Creates a new CPUTypeDefault/CUDATypeDefault class, which you are intended to inherit from and which provides default CPU/CUDA implementations that are layout/dtype agnostic.
- Adds new getCPUAllocator() and getCUDAAllocator() functions, as a more public API for getting your hands on an Allocator.
- Adds allocator() and getDeviceFromPtr(), abstracting the device-specific parts of the storage() methods; these methods are now implemented in the base TypeDefault.
- Deletes the static typeString() method, which is now dead.
- Moves is_cuda/is_sparse/is_distributed to TypeDefault.

Differential Revision: D9631619
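For the out-of-tree use case this targets, a library would then only have to fill in the dtype-specific pieces. The sketch below is hypothetical: ComplexCPUType and the chosen override are illustrative stand-ins, since the full virtual interface of Type is not shown in this PR.

    // Hypothetical out-of-tree subclass (e.g. for a complex dtype).
    // CPUTypeDefault supplies allocator()/getDeviceFromPtr(), and
    // TypeDefault supplies storage() and the backend()-based
    // predicates, so only dtype-specific behavior remains here.
    struct ComplexCPUType : public CPUTypeDefault {
      using CPUTypeDefault::CPUTypeDefault;

      // Illustrative override; the real override set depends on
      // Type's virtual interface.
      const char* toString() const override {
        return "ComplexCPUType";
      }
      // ...plus implementations of the operators this backend supports.
    };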