Description
The plan for moving these is to follow SciPy's API.

For torch.lu, we would like to split it into torch.linalg.lu_factor and torch.linalg.lu_factor_ex (the former implemented as a call to the latter) with the following signatures:
```
def lu_factor(Tensor A, *, pivot: bool = True, out=None) -> (Tensor lu_factor, Tensor lu_pivots)
def lu_factor_ex(Tensor A, *, pivot: bool = True, check_errors: bool = False, out=None) -> ((Tensor lu_factor, Tensor lu_pivots), Tensor info)
```

If `pivot=False`, `lu_pivots` should be an empty tensor.
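Since the proposal follows SciPy's API, a minimal sketch of the corresponding `scipy.linalg.lu_factor` call may clarify the packed return format (the unpacking code below is illustrative, not part of the proposal):

```python
import numpy as np
from scipy.linalg import lu_factor

# lu_factor returns the L and U factors packed into a single matrix,
# plus a vector of pivot indices recording the row swaps.
A = np.array([[2.0, 1.0], [4.0, 3.0]])
lu, piv = lu_factor(A)

# Unpack L (unit lower-triangular) and U from the packed matrix.
L = np.tril(lu, k=-1) + np.eye(2)
U = np.triu(lu)

# Replay the row swaps recorded in piv to recover the permuted A.
PA = A.copy()
for i, p in enumerate(piv):
    PA[[i, p]] = PA[[p, i]]
assert np.allclose(PA, L @ U)
```

The proposed `lu_factor` would return the same `(lu, pivots)` pair, with `lu_factor_ex` additionally returning the LAPACK `info` tensor.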
For torch.lu_unpack, we want to combine it with torch.linalg.lu_factor and have it as torch.linalg.lu with signature
```
def lu(Tensor A, *, pivot: bool = True, out=None) -> (Tensor P, Tensor L, Tensor U)
```

where `pivot` controls whether the LU decomposition is computed with partial pivoting. If `pivot=False`, `P` should be an empty tensor.
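For comparison, SciPy already exposes this three-matrix form via `scipy.linalg.lu`; a short sketch of that call illustrates the contract the proposed `torch.linalg.lu` would follow:

```python
import numpy as np
from scipy.linalg import lu

# scipy.linalg.lu returns the factorization as three explicit matrices:
# P a permutation matrix, L unit lower-triangular, U upper-triangular,
# satisfying A = P @ L @ U.
A = np.array([[2.0, 1.0], [4.0, 3.0]])
P, L, U = lu(A)
assert np.allclose(A, P @ L @ U)
```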
For solve_lu, we want to update it to have signature
```
def solve_lu(A_lu: (Tensor, Tensor), B: Tensor, *, left: bool = True, adjoint: bool = False) -> Tensor
```

`adjoint` will dispatch to `trans='T'` or `trans='C'` depending on the dtype. The `left` keyword should be implemented by transposing the inputs and the output. If `left=False` and `adjoint=True`, this is equivalent to solving for `A.conj()`. That is fine, because the LU decomposition of the conjugate of a matrix is obtained by conjugating the LU factors.
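The semantics above mirror SciPy's `lu_solve`, whose integer `trans` flag plays the role the proposal gives to `adjoint`; a minimal sketch:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# lu_solve takes the (lu, piv) pair produced by lu_factor plus a
# right-hand side. trans=0 solves A x = b, trans=1 solves A^T x = b,
# and trans=2 solves A^H x = b (the 'C' case for complex dtypes).
A = np.array([[2.0, 1.0], [4.0, 3.0]])
b = np.array([1.0, 2.0])
A_lu = lu_factor(A)

x = lu_solve(A_lu, b)            # solves A x = b
xt = lu_solve(A_lu, b, trans=1)  # solves A^T x = b
assert np.allclose(A @ x, b)
assert np.allclose(A.T @ xt, b)
```

The proposed `left=False` case has no direct SciPy analogue; as the text notes, it would be implemented by transposing the inputs and the output.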
cc @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk @xwang233 @lezcano