WARNING: BEFORE STARTING ON A STRUCTURED KERNEL PORT, CHECK OPEN PULL REQUESTS TO SEE IF SOMEONE IS WORKING ON IT ALREADY. https://github.com/pytorch/pytorch/pulls?q=is%3Apr+is%3Aopen+structured+in%3Atitle
Here is the list of all functions that are immediately eligible to be ported to structured kernels (that is, they already have both a functional and an out variant with exactly the same signatures); a sketch of what a port involves follows the list:
- abs (Anjali)
- absolute
- angle
- sgn
- conj
- acos (Port all unary float functions to structured #56082)
- arccos
- add.Tensor
- _add_relu.Tensor
- addmv (addmv: port to structured kernels, improve error checks #55746)
- addr
- all.dim
- all.dimname
- any.dim
- any.dimname
- argmax
- argmin
- acosh (acosh: port to structured kernel #55540)
- arccosh
- asinh (Port all unary float functions to structured #56082)
- arcsinh
- atanh
- arctanh
- asin
- arcsin
- atan (Port all unary float functions to structured #56082)
- arctan
- baddbmm ([Structured Kernels] Port for baddbmm and bmm #64805)
- bernoulli
- binary_cross_entropy
- binary_cross_entropy_backward
- bitwise_not
- copysign.Tensor (copysign: port to structured kernel #55040)
- logical_not
- logical_xor
- logical_and
- logical_or
- bmm ([Structured Kernels] Port for baddbmm and bmm #64805)
- _bmm
- cat
- cat.names
- ceil (ceil: port to structured #57589)
- clamp (Add OptionalRef; clamp: port to structured kernel #61361)
- clamp_max
- clamp_min
- clip
- complex
- polar
- cos (cos: port to structured kernel #55564)
- cosh (cosh: port to structured kernel #55563)
- cummax
- cummax.dimname
- cummin
- cummin.dimname
- cumprod
- cumprod.dimname
- cumsum
- cumsum.dimname
- diff
- div.Tensor
- div.Tensor_mode
- divide.Tensor
- divide.Tensor_mode
- true_divide.Tensor
- dot
- vdot
- row_stack
- erf (Port all unary float functions to structured #56082)
- erfc (Port all unary float functions to structured #56082)
- exp (Port all unary float functions to structured #56082)
- exp2 (Port all unary float functions to structured #56082)
- expm1 (Port all unary float functions to structured #56082)
- floor (floor: port to structured #57587)
- floor_divide
- frac
- gcd (gcd: port to structured #57624)
- lcm (lcm: port to structured #57628)
- _fft_r2c
- _fft_c2r
- _fft_c2c ([WIP] Migrate _fft_c2c_mkl to structured kernel #55730)
- inverse
- kron
- kthvalue
- kthvalue.dimname
- nan_to_num
- ldexp.Tensor
- log (Port all unary float functions to structured #56082)
- log10 (Port all unary float functions to structured #56082)
- log1p (Port all unary float functions to structured #56082)
- log2 (Port all unary float functions to structured #56082)
- logaddexp (logaddexp and logaddexp2: port to structured #57629)
- logaddexp2 (logaddexp and logaddexp2: port to structured #57629)
- xlogy.Tensor (xlogy: Port to structured #60814)
- xlogy.Scalar_Self (xlogy: Port to structured #60814)
- xlogy.Scalar_Other (xlogy: Port to structured #60814)
- _logcumsumexp
- logcumsumexp
- logcumsumexp.dimname
- _log_softmax (Port log_softmax to structured kernel #57374)
- _log_softmax_backward_data (@SplitInfinity)
- logsumexp
- logsumexp.names
- matmul
- matrix_power
- _compute_linear_combination
- max.dim
- max.names_dim
- amax (Port amax to structured kernel #72124)
- mean.dim
- mean.names_dim
- median.dim
- median.names_dim
- nanmedian.dim
- nanmedian.names_dim
- min.dim
- min.names_dim
- amin
- mm (port mm to structured kernel #57755)
- mode
- mode.dimname
- mul.Tensor
- multiply.Tensor
- mv
- narrow_copy
- native_batch_norm
- batch_norm_elemt
- rad2deg
- deg2rad
- reciprocal (Port all unary float functions to structured #56082)
- neg (neg: port to structured kernel #57212)
- negative
- round
- rsqrt
- silu (silu: port to structured #58050)
- sigmoid (Port all unary float functions to structured #56082)
- logit
- sin
- sinc
- sinh (sinh: port to structured kernel #55538)
- sspaddmm
- stack
- _stack
- hstack
- vstack
- dstack
- sum.dim_IntList
- sum.dim_DimnameList
- nansum.dim_IntList
- sqrt (Port all unary float functions to structured #56082)
- square (port square to structured #58266)
- std.dim
- std.names_dim
- prod.dim_int
- prod.dim_Dimname
- tan (Port all unary float functions to structured #56082)
- tanh (Port all unary float functions to structured #56082)
- tensordot
- threshold
- trunc (trunc: port to structured #57350)
- fix
- var.dim
- var.names_dim
- norm.ScalarOpt_dim_dtype
- norm.ScalarOpt_dim
- norm.names_ScalarOpt_dim_dtype
- norm.names_ScalarOpt_dim
- frexp.Tensor
- frobenius_norm.dim
- nuclear_norm
- nuclear_norm.dim
- sub.Tensor
- subtract.Tensor
- heaviside
- addmm (addmm: port to structured kernel #57417)
- hspmm
- eq.Scalar (Port eq kernel to structured kernels #60177)
- eq.Tensor (Port eq kernel to structured kernels #60177)
- bitwise_and.Tensor (bitwise_and: Port to structured #60813)
- bitwise_and.Scalar (bitwise_and: Port to structured #60813)
- bitwise_or.Tensor
- bitwise_or.Scalar
- bitwise_xor.Tensor (bitwise_xor: Port to structured #60812)
- bitwise_xor.Scalar (bitwise_xor: Port to structured #60812)
- atan2 (atan2: port to structured kernel #55130)
- tril
- triu
- digamma (Port all unary float functions to structured #56082)
- renorm
- lerp.Scalar
- lerp.Tensor
- fmod.Scalar (fmod: Port to structured #60809)
- fmod.Tensor (fmod: Port to structured #60809)
- remainder.Scalar
- remainder.Tensor
- addbmm (Port addbmm to structured kernels #60647)
- addcdiv
- diag
- cross
- ne.Scalar
- ne.Tensor
- not_equal.Scalar
- not_equal.Tensor
- ge.Scalar
- ge.Tensor
- greater_equal.Scalar
- greater_equal.Tensor
- le.Scalar
- le.Tensor
- less_equal.Scalar
- less_equal.Tensor
- gt.Scalar
- gt.Tensor
- greater.Scalar
- greater.Tensor
- lt.Scalar
- lt.Tensor
- less.Scalar
- less.Tensor
- take
- take_along_dim
- index_select
- index_select.dimname
- masked_select
- nonzero (@krshrimali)
- gather
- gather.dimname
- addcmul
- lstsq
- triangular_solve (Port triangular_solve to structured kernel #61857)
- symeig
- eig
- svd
- cholesky
- cholesky_solve
- solve
- cholesky_inverse
- qr
- geqrf
- orgqr
- ormqr
- lu_solve
- multinomial
- lgamma (Port all unary float functions to structured #56082)
- polygamma
- erfinv (Port all unary float functions to structured #56082)
- i0
- sign (sign: port to structured #57588)
- signbit (Port signbit to structured kernel #57936)
- histc
- hypot (hypot: port to structured #57627)
- igamma (igamma and igammac: port to structured #57626)
- igammac (igamma and igammac: port to structured #57626)
- nextafter (nextafter: port to structured #57625)
- fmin
- fmax
- maximum (maximum and minimum: port to structured #57630)
- max.other
- minimum (maximum and minimum: port to structured #57630)
- min.other
- quantile.scalar
- quantile
- nanquantile.scalar
- nanquantile
- sort
- sort.stable
- sort.dimname
- sort.dimname_stable
- msort
- topk
- pow.Tensor_Tensor
- pow.Scalar
- pow.Tensor_Scalar
- float_power.Tensor_Tensor (Port float_power kernel to structured kernels #60855)
- float_power.Scalar (Port float_power kernel to structured kernels #60855)
- float_power.Tensor_Scalar (Port float_power kernel to structured kernels #60855)
- normal.Tensor_float
- normal.float_Tensor
- normal.Tensor_Tensor
- _cumsum
- _cumprod
- _cat
- _mode
- bucketize.Tensor
- searchsorted.Tensor
- mse_loss
- mse_loss_backward
- l1_loss
- l1_loss_backward
- multi_margin_loss
- multi_margin_loss_backward
- multilabel_margin_loss
- multilabel_margin_loss_forward
- multilabel_margin_loss_backward
- nll_loss
- nll_loss_forward
- nll_loss_backward
- nll_loss2d
- nll_loss2d_forward
- nll_loss2d_backward
- smooth_l1_loss (Port smooth_l1_loss to structured kernels #67404)
- smooth_l1_loss_backward
- huber_loss
- huber_loss_backward
- soft_margin_loss
- soft_margin_loss_backward
- elu (elu: port to structured #57619)
- glu
- glu_backward
- hardsigmoid (hardsigmoid: port to structured #57622)
- hardtanh
- hardtanh_backward
- hardswish (this one is complicated; see Converting hardswish to structured kernels with metatensor support #66899)
- leaky_relu (leaky_relu: port to structured #57621)
- log_sigmoid
- log_sigmoid_forward
- log_sigmoid_backward
- rrelu_with_noise
- softplus (softplus: port to structured #57620)
- softplus_backward (Port softplus_backward to structured #58482)
- softshrink (softshrink: port to structured #57623)
- softshrink_backward
- (XLA) adaptive_avg_pool2d (this one is a little tricky because there is specialized derivative handling; see ngimel)
- adaptive_avg_pool3d
- adaptive_avg_pool3d_backward
- adaptive_max_pool2d (adaptive_max_pool2d: port to structured kernel #56317)
- adaptive_max_pool2d_backward (adaptive_max_pool2d_backward: port to structured kernel #56799)
- adaptive_max_pool3d (adaptive_max_pool3d: port to structured kernel #56320)
- adaptive_max_pool3d_backward (Port adaptive_max_pool3d_backward to structured kernel #56800)
- avg_pool2d (avg_pool2d: port to structured #58987)
- avg_pool2d_backward
- avg_pool3d
- avg_pool3d_backward (avg_pool3d_backward: Port to structured #59084)
- fractional_max_pool2d
- fractional_max_pool2d_backward
- fractional_max_pool3d
- fractional_max_pool3d_backward
- max_pool2d_with_indices
- max_pool2d_with_indices_backward
- max_pool3d_with_indices
- max_pool3d_with_indices_backward
- max_unpool2d
- max_unpool2d_backward
- max_unpool3d
- max_unpool3d_backward
- reflection_pad1d (reflection_pad1d: port to structured kernel #55531)
- reflection_pad1d_backward (reflection_pad1d_backward: Port to structured #59103)
- reflection_pad2d
- reflection_pad2d_backward
- replication_pad1d (replication_padding1d: port to structured #55481)
- replication_pad1d_backward (ezyang)
- replication_pad2d (asuhan)
- replication_pad2d_backward
- replication_pad3d (ezyang)
- replication_pad3d_backward
- upsample_linear1d
- upsample_linear1d_backward
- upsample_bilinear2d
- upsample_bilinear2d_backward
- upsample_bicubic2d
- upsample_bicubic2d_backward
- upsample_trilinear3d
- upsample_trilinear3d_backward
- upsample_nearest1d
- upsample_nearest1d_backward
- upsample_nearest2d
- upsample_nearest2d_backward
- upsample_nearest3d
- upsample_nearest3d_backward
- sigmoid_backward (sigmoid_backward: Port to structured #60815)
- logit_backward (logit_backward: Port to structured #60817)
- tanh_backward (tanh_backward: Port to structured #60816)
- slow_conv_transpose2d (Port slow_conv_transpose2d to structured #55503, but blocked on Optional[Tensor] support)
- slow_conv_transpose3d
- thnn_conv2d
- thnn_conv2d_forward
- thnn_conv_depthwise2d
- thnn_conv_depthwise2d_forward
- slow_conv3d
- slow_conv3d_forward
- col2im
- col2im_backward
- column_stack
- im2col
- im2col_backward
- isposinf (Port isposinf & isneginf kernel to structured kernels #60633)
- isneginf (Port isposinf & isneginf kernel to structured kernels #60633)
- special_entr (Port all unary float functions to structured #56082)
- special_gammaln
- special_erf
- special_erfc
- special_erfinv
- fft_fft
- fft_ifft
- fft_rfft
- fft_irfft
- fft_hfft
- fft_ihfft
- fft_fft2
- fft_ifft2
- fft_rfft2
- fft_irfft2
- fft_fftn
- fft_ifftn
- fft_rfftn
- fft_irfftn
- linalg_cholesky
- linalg_slogdet
- linalg_eigh
- linalg_householder_product
- linalg_inv
- inner
- outer
- ger
- linalg_norm
- linalg_norm.ord_str
- linalg_vector_norm
- linalg_svd
- linalg_cond
- linalg_cond.p_str
- linalg_pinv
- linalg_pinv.rcond_tensor
- linalg_solve
- linalg_tensorinv
- linalg_tensorsolve
- linalg_qr
- linalg_matrix_power
- linalg_matrix_rank
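For orientation, here is a minimal sketch of what such a port typically looks like, modeled on merged unary ports like neg (#57212). The names my_op and my_op_stub are hypothetical placeholders, and the sketch assumes the corresponding native_functions.yaml change has already been made (structured: True plus structured_inherits: TensorIteratorBase on the out variant, structured_delegate on the functional variant) so codegen emits the base classes these macros expand against:

```cpp
// Sketch only: "my_op" and "my_op_stub" are stand-ins for a real op and
// its DECLARE_DISPATCH stub, and this assumes native_functions.yaml has
// been updated so the structured codegen produces the meta/impl classes.

namespace at {
namespace meta {

// Meta function: shape/dtype checks and output declaration only.
// For a TensorIterator-backed unary op this is a single build_* call.
TORCH_META_FUNC(my_op)(const Tensor& self) {
  build_borrowing_unary_op(maybe_get_output(), self);
}

} // namespace meta

namespace native {

// Impl function: only the computation. The output has already been
// allocated (or the out= tensor resized and checked) by the structured
// machinery before this body runs.
TORCH_IMPL_FUNC(my_op_out)(const Tensor& self, const Tensor& result) {
  my_op_stub(device_type(), *this);
}

} // namespace native
} // namespace at
```

Once both pieces are in place, the functional, inplace, and out variants are all generated from the same meta/impl pair, which is the point of the port.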
Things that require adding an out kernel and then can be made structured:
- _embedding_bag (high priority)
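For an op in this bucket, the first step is a plain out kernel. A hedged sketch of the usual shape follows; my_op is again a placeholder, the real code lives next to the op's existing kernel, and a multi-output op like _embedding_bag takes one out tensor per return value:

```cpp
#include <ATen/ATen.h>
#include <ATen/native/Resize.h>

namespace at {
namespace native {

// Hypothetical out variant: resize the caller-provided tensor to the
// shape the functional variant would produce, then compute into it.
Tensor& my_op_out(const Tensor& self, Tensor& out) {
  at::native::resize_output(out, self.sizes());
  // For illustration only, delegate to a functional op and copy; a real
  // out kernel computes directly into `out` without the extra allocation.
  out.copy_(self.neg());
  return out;
}

} // namespace native
} // namespace at
```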
Issues marked (XLA) are supported by XLA and thus are higher priority to port to structured.
Functions with 1d/2d/3d in their name tend to be easier and more beginner-friendly.
Some of these functions may not be immediately portable. Common grounds for disqualification:
- It's a reduction (blocked on Don't allocate result Tensors in out overloads: Reduction Ops #53218)
- It's multi-output (multi-output support still needs to be implemented)
- It's still a TH kernel (need to do the TH to ATen port first)
- It's an alias/composite kernel (requires design from Dispatch-less structured wrapper / composite / alias kernels #50953)
- It behaves differently on CPU and CUDA; you'll have to fix this bug first! (The most common situation is divergent memory format behavior; see the parity-check sketch after this list.)
- It has optional tensor arguments (see the Optional[Tensor] blocker noted for slow_conv_transpose2d above)
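For the memory format bullet, a quick way to smoke-test an op before starting the port is to compare CPU and CUDA outputs on a channels-last input. A minimal sketch, using relu as a stand-in for the op under consideration:

```cpp
#include <ATen/ATen.h>
#include <iostream>

int main() {
  // Channels-last input: divergent ops often differ across devices in
  // whether the output preserves the input's memory format.
  at::Tensor x = at::randn({2, 3, 4, 5})
                     .contiguous(at::MemoryFormat::ChannelsLast);

  at::Tensor cpu_out = at::relu(x);          // stand-in for the real op
  at::Tensor cuda_out = at::relu(x.cuda());  // same op on CUDA

  TORCH_CHECK(
      cpu_out.suggest_memory_format() == cuda_out.suggest_memory_format(),
      "CPU and CUDA disagree on output memory format");
  std::cout << "memory formats agree" << std::endl;
}
```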