
Support batched SVD #2337

Closed
toslunar wants to merge 8 commits into cupy:master from toslunar:batch-svd

Conversation

@toslunar
Member

No description provided.

@toslunar toslunar added the cat:enhancement Improvements to existing features label Jul 25, 2019
@toslunar
Member Author

  • cusolverDn<t>gesvdjBatched() only supports m <= 32 and n <= 32.
  • cusolverDn<t>gesvdaStridedBatched() only supports full_matrices=False because it's a method for a tall skinny matrix.

Can we use the second one, even though the a stands for "approximation"? (I'd rather not.)

Should we implement the other cases using a trivial loop of cusolverDn<t>gesvd()? I'm willing to work on it if it'll be better than a loop outside cupy.linalg.svd.
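The trivial-loop fallback mentioned above could look roughly like this. A sketch only: np.linalg.svd stands in for the per-matrix cusolverDn<t>gesvd call so it runs without a GPU, and `batched_svd_loop` is a hypothetical name; the reshape/stack bookkeeping is the part a batched cupy.linalg.svd fallback would need either way.

```python
import numpy as np

def batched_svd_loop(a, full_matrices=True):
    """Trivial per-matrix SVD loop over a stack of shape (..., M, N).

    Sketch only: np.linalg.svd stands in for the single-matrix
    gesvd call, so this runs on the CPU.
    """
    batch_shape = a.shape[:-2]
    flat = a.reshape((-1,) + a.shape[-2:])
    us, ss, vts = [], [], []
    for mat in flat:
        u, s, vt = np.linalg.svd(mat, full_matrices=full_matrices)
        us.append(u)
        ss.append(s)
        vts.append(vt)
    # Re-stack per-matrix results and restore the batch dimensions.
    u = np.stack(us).reshape(batch_shape + us[0].shape)
    s = np.stack(ss).reshape(batch_shape + ss[0].shape)
    vt = np.stack(vts).reshape(batch_shape + vts[0].shape)
    return u, s, vt
```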

@hvy
Member

hvy commented Aug 9, 2019

I don't think we want to use the approximation, for obvious reasons such as the outputs differing from naively looping over the batch with cupy.linalg.svd. It might still be worth considering exposing this cuSOLVER API in CuPy through a separate interface that doesn't exist in NumPy, though.

With that said, using gesvdjBatched when possible and otherwise looping sounds reasonable to me. That's probably "more or less" what this cuSOLVER routine does anyway.

gesvdjBatched performs gesvdj on each matrix. It requires that all matrices are of the same size m,n no greater than 32 and are packed in contiguous way,

(https://docs.nvidia.com/cuda/cusolver/index.html#cuds-lt-t-gt-gesvdjbatch)

Regarding whether it's better than looping over cupy.linalg.svd, we should do some benchmarking along the way, since it's difficult to tell beforehand?
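A sketch of that dispatch, with the m, n <= 32 size check discussed above. This is a hypothetical structure only: np.linalg.svd stands in for both the cusolverDn&lt;t&gt;gesvdjBatched call and the looped single-matrix gesvd call, so the example runs on the CPU, and `svd_batched` is an illustrative name, not CuPy's API.

```python
import numpy as np

GESVDJ_MAX_DIM = 32  # cusolverDn<t>gesvdjBatched only supports m, n <= 32

def svd_batched(a, full_matrices=True):
    """Dispatch sketch: batched Jacobi kernel for small matrices,
    per-matrix loop otherwise.  np.linalg.svd stands in for both
    cuSOLVER paths so this runs without a GPU.
    """
    m, n = a.shape[-2:]
    if m <= GESVDJ_MAX_DIM and n <= GESVDJ_MAX_DIM:
        # Small matrices: one batched kernel launch for the whole stack.
        return np.linalg.svd(a, full_matrices=full_matrices)
    # Large matrices: loop over the single-matrix routine and re-stack.
    flat = a.reshape((-1,) + a.shape[-2:])
    u, s, vt = (np.stack(xs) for xs in zip(
        *(np.linalg.svd(mat, full_matrices=full_matrices) for mat in flat)))
    batch = a.shape[:-2]
    return (u.reshape(batch + u.shape[1:]),
            s.reshape(batch + s.shape[1:]),
            vt.reshape(batch + vt.shape[1:]))
```

Benchmarking the two branches around the crossover sizes would then show whether keeping the loop inside cupy.linalg.svd beats looping outside it.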

@hvy
Member

hvy commented Aug 9, 2019

By the way, I took a look at NumPy and it seems like they're looping manually too? Not really sure if this is of any interest but just leaving a reference. https://github.com/numpy/numpy/blob/master/numpy/linalg/umath_linalg.c.src#L2762-L2795

@kmaehashi
Member

@toslunar Kindly ping.

With that said, using gesvdjBatched when possible and otherwise loop sounds reasonable to me. That's probably "more or less" what this cuSOLVER routines does anyway.

I agree with this idea (although documentation may be necessary depending on how it affects the overall performance).

@toslunar
Member Author

toslunar commented Apr 1, 2020

It's better to make the feature (gesvdjBatched) a separate function, as in #3192.

toslunar added a commit to toslunar/cupy that referenced this pull request Apr 1, 2020
@kmaehashi kmaehashi assigned takagi and unassigned hvy Apr 14, 2020
@asi1024 asi1024 modified the milestones: v8.0.0b2, v7.4.0, v8.0.0b3 Apr 23, 2020
@takagi
Contributor

takagi commented May 19, 2020

Now that the batched cupy.cusolver.gesvdj, which uses cusolverDn<t>gesvdjBatched, has been merged in #3247, what remains is to fix cupy.linalg.svd to support batched SVD through the cupy.cusolver layer.
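The batched semantics cupy.linalg.svd would then need to match are NumPy's stacked-input convention. A quick CPU-side illustration of the expected shapes (NumPy only, nothing CuPy-specific):

```python
import numpy as np

# For a stack of shape (..., M, N), np.linalg.svd with
# full_matrices=False returns U (..., M, K), s (..., K), Vt (..., K, N)
# with K = min(M, N); a batched cupy.linalg.svd should match this.
a = np.random.default_rng(0).standard_normal((4, 5, 3))  # 4 matrices, 5x3
u, s, vt = np.linalg.svd(a, full_matrices=False)
print(u.shape, s.shape, vt.shape)  # (4, 5, 3) (4, 3) (4, 3, 3)
```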

@alexbo1

alexbo1 commented May 26, 2020

I need to solve a large number of small symmetric/Hermitian eigenproblems.
It would therefore be very helpful to also have support for the batched Jacobi cuSOLVER routines:

(https://docs.nvidia.com/cuda/cusolver/index.html#cuds-lt-t-gt-syevjbatch)
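As a CPU-side illustration of the requested semantics: np.linalg.eigh already accepts a stack of matrices, and a CuPy counterpart backed by cusolverDn&lt;t&gt;syevjBatched (hypothetical here, not an existing CuPy API) would presumably mirror these shapes.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((8, 4, 4))
a = a + a.swapaxes(-1, -2)   # make each 4x4 matrix in the stack symmetric

# w: (8, 4) ascending eigenvalues per matrix, v: (8, 4, 4) eigenvectors
w, v = np.linalg.eigh(a)
```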

@emcastillo emcastillo modified the milestones: v8.0.0b3, v8.0.0b4 May 29, 2020
@kmaehashi
Member

I talked with @toslunar and agreed to close this. Let's continue the discussion in #3470.

@kmaehashi kmaehashi closed this Jun 22, 2020
@toslunar toslunar deleted the batch-svd branch June 22, 2020 10:05
@leofang leofang mentioned this pull request Feb 5, 2021