
Conversation


@ezyang ezyang commented Sep 3, 2018

Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault

Our short-term plan for supporting out-of-tree complex development requires an
external library to add a custom subclass of Type without access to the
code generation facilities in ATen. This commit reorganizes Type so
as to minimize the amount of boilerplate you have to write when making
a subclass of Type. (A sketch of the intended structure follows the list
below.)

In particular, it:

  • Creates new CPUTypeDefault/CUDATypeDefault classes, intended to be
    inherited from, which provide layout/dtype-agnostic default
    implementations for CPU/CUDA.
  • Adds new getCPUAllocator() and getCUDAAllocator() functions, as a more
    public API for getting your hands on an Allocator.
  • Adds allocator() and getDeviceFromPtr(), abstracting the device-specific
    parts of the storage() methods; these methods are now implemented in the
    TypeDefault base class.
  • Deletes the static typeString() method, which is now dead.
  • Moves is_cuda/is_sparse/is_distributed to TypeDefault.
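
For illustration, here is a minimal, self-contained sketch of the layering this
reorganization is aiming for. It is not the real ATen code: the names
TypeDefault, CPUTypeDefault, getCPUAllocator(), allocator(), is_cuda() and
is_sparse() are taken from the description above, while the simplified
Allocator stand-in and the example out-of-tree subclass MyComplexCPUType are
assumptions made up for this sketch.

// Stand-in sketch only; mirrors the layering described above, not ATen's headers.
#include <cstdio>

// Simplified stand-in for an allocator; the PR exposes the real ones through
// getCPUAllocator()/getCUDAAllocator().
struct Allocator { /* allocate/free elided */ };

Allocator* getCPUAllocator() {
  static Allocator cpu_allocator;
  return &cpu_allocator;
}

// TypeDefault: layout/dtype-agnostic defaults live here, written once in terms
// of a few virtual hooks such as allocator().
struct TypeDefault {
  virtual ~TypeDefault() = default;
  virtual bool is_cuda() const { return false; }
  virtual bool is_sparse() const { return false; }
  virtual Allocator* allocator() const = 0;
};

// CPUTypeDefault: the device-specific but dtype-agnostic layer you inherit from.
struct CPUTypeDefault : TypeDefault {
  Allocator* allocator() const override { return getCPUAllocator(); }
};

// A hypothetical out-of-tree type (e.g. a complex dtype) only adds what is
// genuinely its own; none of the boilerplate above needs code generation.
struct MyComplexCPUType : CPUTypeDefault {
  const char* toString() const { return "MyComplexCPUType"; }
};

int main() {
  MyComplexCPUType t;
  std::printf("is_cuda=%d toString=%s\n", int(t.is_cuda()), t.toString());
  return 0;
}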

Differential Revision: D9631619

ezyang added 30 commits August 29, 2018 09:11
Differential Revision: D9557315
Differential Version: 56407314
Differential Revision: D9557315
Differential Version: 56413294
Differential Revision: D9557315
Differential Version: 56415719
Differential Revision: D9557315
Differential Version: 56424143
Differential Revision: D9557315
Differential Version: 56437859
Differential Revision: D9561478
Differential Version: 56437861
Differential Revision: D9561478
Differential Version: 56440382
Differential Revision: D9561478
Differential Version: 56440854
Differential Revision: D9562197
Differential Version: 56442349
Differential Revision: D9562312
Differential Version: 56443146
Differential Revision: D9562312
Differential Version: 56443358
Differential Revision: D9562467
Differential Version: 56443875
Differential Revision: D9562467
Differential Version: 56445436
Differential Revision: D9562312
Differential Version: 56445886
Differential Revision: D9557315
Differential Version: 56444203
Differential Revision: D9561478
Differential Version: 56446440
Differential Revision: D9562467
Differential Version: 56447016
Differential Revision: D9562467
Differential Version: 56447216
Differential Revision: D9561478
Differential Version: 56449391
Differential Revision: D9564206
Differential Version: 56452969
Differential Revision: D9562312
Differential Version: 56453321
Differential Revision: D9564516
Differential Version: 56455053
Differential Revision: D9562467
Differential Version: 56473044
Differential Revision: D9562197
Differential Version: 56473286
Differential Revision: D9578398
Differential Version: 56517363
Differential Revision: D9578399
Differential Version: 56517362
Differential Revision: D9578734
Differential Version: 56519039
Differential Revision: D9578734
Differential Version: 56520207
Differential Revision: D9578734
Differential Version: 56526151
Differential Revision: D9581560
Differential Version: 56526196
Differential Revision: D9614321
Differential Version: 56803229
Differential Revision: D9631619
Differential Version: 56803477
bool is_sparse() const override {
  return backend() == Backend::SparseCPU || backend() == Backend::SparseCUDA;
}
bool is_distributed() const override {

Differential Revision: D9628281
Differential Version: 56805602
Differential Revision: D9614321
Differential Version: 56805604
Differential Revision: D9631619
Differential Version: 56805603
Differential Revision: D9631619
Differential Version: 56809675
Differential Revision: D9631619
Differential Version: 56834201

ezyang commented Sep 4, 2018

@pytorchbot retest this please

5 similar comments

@ezyang ezyang changed the base branch from export-D9614321 to master September 4, 2018 20:16

@ssnl ssnl left a comment

LGTM

Differential Revision: D9631619
Differential Version: 56884928
zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 5, 2018
Summary:
Pull Request resolved: pytorch/pytorch#11205

Our short-term plan for supporting out-of-tree complex development requires an
external library to add a custom subclass of Type without access to the
code generation facilities in ATen. This commit reorganizes Type so
as to minimize the amount of boilerplate you have to write when making
a subclass of Type.

In particular, it:
- Creates new CPUTypeDefault/CUDATypeDefault classes, intended to be
  inherited from, which provide layout/dtype-agnostic default
  implementations for CPU/CUDA.
- Adds new getCPUAllocator() and getCUDAAllocator() functions, as a more
  public API for getting your hands on an Allocator.
- Adds allocator() and getDeviceFromPtr(), abstracting the device-specific
  parts of the storage() methods; these methods are now implemented in the
  TypeDefault base class.
- Deletes the static typeString() method, which is now dead.
- Moves is_cuda/is_sparse/is_distributed to TypeDefault.

Reviewed By: SsnL

Differential Revision: D9631619

fbshipit-source-id: 40b600d99691230e36e03eb56434c351cbc2aa3a
petrex pushed a commit to petrex/pytorch that referenced this pull request Sep 5, 2018
resolve conflict in data parallel model
* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)

Summary:
Pull Request resolved: pytorch#11205

Our short-term plan for supporting out-of-tree complex development requires an
external library to add a custom subclass of Type without access to the
code generation facilities in ATen. This commit reorganizes Type so
as to minimize the amount of boilerplate you have to write when making
a subclass of Type.

In particular, it:
- Creates new CPUTypeDefault/CUDATypeDefault classes, intended to be
  inherited from, which provide layout/dtype-agnostic default
  implementations for CPU/CUDA.
- Adds new getCPUAllocator() and getCUDAAllocator() functions, as a more
  public API for getting your hands on an Allocator.
- Adds allocator() and getDeviceFromPtr(), abstracting the device-specific
  parts of the storage() methods; these methods are now implemented in the
  TypeDefault base class.
- Deletes the static typeString() method, which is now dead.
- Moves is_cuda/is_sparse/is_distributed to TypeDefault.

Reviewed By: SsnL

Differential Revision: D9631619

fbshipit-source-id: 40b600d99691230e36e03eb56434c351cbc2aa3a
@soumith soumith deleted the export-D9631619 branch February 21, 2019 12:09
@ezyang ezyang added the merged label Jun 26, 2019
