Conversation

@ebetica (Contributor) commented Dec 20, 2016

I redid this PR, since the last one was old and I did not want to rebase everything just to change up the module format. For reference: #116

Having sparse modules in csrc/Modules.cpp allows us to unify the calling of functions that take sparse and non-sparse arguments. Compared to the previous PR, sparse.addmm is now just torch.addmm, and torch dispatches to the correct function.
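For illustration, here is a minimal sketch of the unified entry point as seen from Python. It uses the present-day torch.sparse_coo_tensor constructor, which is an assumption on my part (the constructor available at the time of this PR was different), but the dispatch idea is the same:

import torch

# 2 x nnz index matrix and matching values for a 2x3 COO tensor
i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, (2, 3))

d = torch.randn(3, 4)
bias = torch.zeros(2, 4)

# No separate sparse.addmm is needed from Python; the stateless
# torch.addmm accepts the sparse first matrix and dispatches correctly.
out = torch.addmm(bias, s, d)
print(out.shape)  # torch.Size([2, 4])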

This is exposed as a "Sparse=True" flag on the Embedding module. The modifications to the training loop are given in ebetica/examples@3c41e57
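Roughly, the flag is used like this (a sketch assuming the modern keyword spelling sparse=True and plain SGD, both of which handle sparse gradients today; the exact training-loop changes are in the linked examples commit):

import torch
import torch.nn as nn
import torch.optim as optim

emb = nn.Embedding(10000, 128, sparse=True)   # gradients for emb.weight are sparse
opt = optim.SGD(emb.parameters(), lr=0.1)

idx = torch.randint(0, 10000, (32,))
loss = emb(idx).sum()

opt.zero_grad()
loss.backward()   # emb.weight.grad only touches the rows that were looked up
opt.step()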

The list of implemented functions is in torch/csrc/generic/methods/SparseTensor.cwrap

@ebetica mentioned this pull request Dec 20, 2016
@apaszke (Contributor) left a comment

I think it would be better to move most of the C bindings to torch/csrc/sparse


@apaszke (Contributor) left a comment

Not sure how hard that would be, but I think we shouldn't make Tensor.cpp any bigger; it already takes ages to compile. Apart from this and the minor comments from my review, it looks good to me now.


@soumith (Contributor) commented Dec 21, 2016

@apaszke Zeming is on vacation till the end of the year on the beaches of Costa Rica. If the last changes are minor, can you quickly make them locally and push them in?

@adamlerer (Contributor) commented
cc @alexpeys

Further review threads were opened on the hunks below; the comments themselves were marked as off-topic.

# cwrap generator: fill in defaults for a declaration ('sparse' handling is new in this PR)
def set_declaration_defaults(self, declaration):
    declaration.setdefault('arguments', [])
    declaration.setdefault('return', 'void')
    if 'sparse' not in declaration:

$init
$options
PyObject * $name(PyObject *self, PyObject *args, PyObject *kwargs)


}

/***
* SPARSE STATELESS FUNCTIONS


[[
  name: size
  defined_if: "!IS_CUDA"


  - arg: real value
    default: AS_REAL(1)
  - THTensor* other
  - sparse: True


PyObject* sparse_tensor_classes;

////////////////////////////////////////////////////////////////////////////////
// SPARSE MODULE INITIALIZATION


THLongTensor_zero(csr);

// Convert the sparse matrix to CSR format
#pragma omp parallel for private(i, h, hp0, hp1) schedule(static) if (nnz > 10000)
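As context for the hunk above, this is roughly what the COO-to-CSR row-pointer construction computes (a plain-Python sketch with a hypothetical helper name; the real kernel works on TH tensors and is parallelized with the OpenMP pragma shown):

def coo_rows_to_csr(rows, dim):
    # rows: sorted row indices of the nonzeros; dim: number of matrix rows.
    csr = [0] * (dim + 1)
    for r in rows:              # count nonzeros per row
        csr[r + 1] += 1
    for i in range(dim):        # prefix sum turns counts into row offsets
        csr[i + 1] += csr[i]
    return csr

coo_rows_to_csr([0, 0, 1, 3], dim=4)   # -> [0, 2, 3, 3, 4]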


test_shape(5, 6)
test_shape(10, 10, 10)
test_shape(50, 30, 20)
test_shape(5, 5, 5, 5, 5, 5)
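A hypothetical stand-in for the test helper, to make the calls above concrete (the real test exercises the new sparse kernels; this sketch only checks a dense round trip on a random COO tensor):

import torch

def test_shape(*shape):
    nnz = 10
    indices = torch.stack([torch.randint(0, s, (nnz,)) for s in shape])
    values = torch.randn(nnz)
    x = torch.sparse_coo_tensor(indices, values, shape)
    dense = x.to_dense()
    assert torch.allclose((x + x).to_dense(), dense + dense)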


@apaszke merged commit 59d66e6 into pytorch:master on Jan 4, 2017
mrshenli pushed a commit to mrshenli/pytorch that referenced this pull request Apr 11, 2020
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Sep 23, 2020 and again Sep 24, 2020 (… pytorch#333)

Add Executor method to compile from a string for debug usage. Fix Reduction Scheduler to have TI level perf for FP16 inner dimension reductions. Fix tests to use randn() so large reductions aren't matching on inf.
KsenijaS pushed a commit to KsenijaS/pytorch that referenced this pull request Dec 14, 2020
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this pull request Sep 20, 2021
eellison pushed a commit to eellison/pytorch that referenced this pull request Jun 29, 2022
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
…well as globally (pytorch#333)

* Existing tests passing, still need to add per-tensor tests

* Test is passing, still need to measure performance

* ILP for l2norm functor