Sparse Library #333

Conversation
apaszke left a comment:

I think it would be better to move most of the C bindings to torch/csrc/sparse.
Review threads (comments marked as off-topic and hidden):

torch/csrc/Module.cpp (outdated)
torch/csrc/autograd/variable.h (outdated)
torch/csrc/cuda/Module.cpp (outdated)
torch/csrc/cuda/Module.cpp (outdated)
torch/csrc/generic/SparseTensor.cpp (outdated)
torch/lib/THS/generic/THSTensor.c (outdated)
torch/lib/THS/generic/THSTensor.c (outdated)
torch/csrc/generic/SparseTensor.cpp (outdated)
test/test_sparse.py (outdated)
torch/csrc/generic/SparseTensor.cpp (outdated)
apaszke left a comment:

Not sure how hard that would be, but I think we shouldn't make Tensor.cpp any bigger. It already takes ages to compile. Apart from this and the minor comments from my review, it looks good to me now.
More review threads (comments hidden as off-topic):

torch/lib/THS/generic/THSTensor.c (outdated)
torch/lib/THS/generic/THSTensor.h (outdated)
torch/nn/functions/thnn/sparse.py (outdated)
@apaszke Zeming is on vacation till the end of the year on the beaches of Costa Rica. If the last changes are minor, can you quickly make them locally and push them in?

cc @alexpeys
tools/cwrap/cwrap.py (outdated)

    def set_declaration_defaults(self, declaration):
        declaration.setdefault('arguments', [])
        declaration.setdefault('return', 'void')
        if 'sparse' not in declaration:
tools/cwrap/plugins/THPPlugin.py (outdated)

        $init
        $options
    PyObject * $name(PyObject *self, PyObject *args, PyObject *kwargs)
torch/csrc/Module.cpp (outdated)

    }

    /***
     * SPARSE STATELESS FUNCTIONS
    [[
      name: size
      defined_if: "!IS_CUDA"
    - arg: real value
      default: AS_REAL(1)
    - THTensor* other
    - sparse: True
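Recombining the cwrap fragments quoted in this review, a complete declaration carrying the sparse flag might look like the following. This is an unverified sketch: the function name `addmm` and the exact placement of `sparse: True` relative to the argument list are assumptions for illustration, not taken from the actual SparseTensor.cwrap.

```
[[
  name: addmm
  defined_if: "!IS_CUDA"
  arguments:
    - arg: real value
      default: AS_REAL(1)
    - THTensor* other
    - sparse: True
]]
```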
    PyObject* sparse_tensor_classes;

    ////////////////////////////////////////////////////////////////////////////////
    // SPARSE MODULE INITIALIZATION
    THLongTensor_zero(csr);

    // Convert the sparse matrix to CSR format
    #pragma omp parallel for private(i, h, hp0, hp1) schedule(static) if (nnz > 10000)
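The hunk above zeroes a row-pointer array and then converts the tensor's COO row indices into CSR form, parallelizing the loop with OpenMP when nnz is large. As a plain-C sketch of that conversion (a counting pass plus a prefix sum; the function and variable names here are made up for illustration and this is not the THS implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative COO -> CSR conversion: given the row index of each of
 * the nnz nonzeros, fill csr (length nrows + 1, zero-initialized) so
 * that row r's entries occupy positions csr[r] .. csr[r+1] - 1. */
static void coo_rows_to_csr(const int64_t *row, int64_t nnz,
                            int64_t nrows, int64_t *csr)
{
    for (int64_t i = 0; i < nnz; i++)
        csr[row[i] + 1]++;          /* count nonzeros per row */
    for (int64_t r = 0; r < nrows; r++)
        csr[r + 1] += csr[r];       /* prefix sum -> row pointers */
}
```

The counting loop is the part a `#pragma omp parallel for` would target; the prefix sum stays sequential.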
    test_shape(5, 6)
    test_shape(10, 10, 10)
    test_shape(50, 30, 20)
    test_shape(5, 5, 5, 5, 5, 5)
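The shapes above exercise sparse tensors from 2-D up to 6-D. As a torch-free sketch of the kind of invariant such a shape test can check (a hypothetical helper, not the code in test_sparse.py): densifying a COO tensor must cover every coordinate of the declared shape and must accumulate values at duplicate coordinates.

```python
import itertools

def coo_to_dense(shape, indices, values):
    """Materialize a COO tensor as a dict keyed by coordinate tuple.

    indices is a list of coordinate tuples, values the matching list
    of scalars; duplicate coordinates accumulate, as in COO format.
    """
    dense = {c: 0.0 for c in itertools.product(*(range(d) for d in shape))}
    for coord, v in zip(indices, values):
        dense[coord] += v
    return dense
```

A test in the spirit of test_shape would build a random COO tensor of each shape, densify it, and compare against the dense reference.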
I redid this PR, since the last one was old and I did not want to rebase everything just to change the module format. For reference: #116

Having the sparse modules in csrc/Module.cpp lets us unify the calling of functions that take sparse and non-sparse arguments. What was sparse.addmm in the previous PR is now just torch.addmm, and torch will call the correct function. Sparse support is also exposed as a "sparse=True" flag on the Embedding module; the corresponding modifications to the training loop are given in ebetica/examples@3c41e57.

The list of implemented functions is in torch/csrc/generic/methods/SparseTensor.cwrap.
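To illustrate why a sparse=True flag pays off for an Embedding layer (a hand-rolled sketch of the idea, not PyTorch's implementation): in the backward pass only the rows that were actually looked up receive gradient, so the gradient can be stored as a small map of touched rows instead of a dense num_embeddings-by-dim matrix.

```python
def embedding_sparse_grad(input_indices, grad_output):
    """Accumulate backward gradient only for embedding rows that were
    looked up, returned as a {row_index: grad_vector} dict rather than
    a dense gradient matrix over the whole embedding table."""
    grad = {}
    for idx, g in zip(input_indices, grad_output):
        acc = grad.setdefault(idx, [0.0] * len(g))
        for j, gj in enumerate(g):
            acc[j] += gj   # repeated lookups of a row sum their grads
    return grad
```

For a vocabulary of millions of rows where a batch touches only a few hundred, this keeps the optimizer update proportional to the batch, not the table.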