Add specifications for array manipulation functions #42
Merged
This PR
Notes
This is an initial set of array manipulation functions, paving the way for additional specs in subsequent pull requests. These functions were identified as having the broadest support among array libraries and relatively higher usage among downstream libraries.
Some comments/questions regarding particular APIs...
- `concat`: CuPy requires a tuple rather than a sequence for the first argument. Went with tuple, as more consistent with the rest of the specification (e.g., we require a list of axes to be specified as a tuple, not a sequence). What happens if the provided arrays have different dtypes? What should the dtype of the returned array be? How do type promotion rules factor in here?
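As a concrete illustration of the open dtype question, NumPy's current behavior is to apply its own type promotion rules when concatenating arrays of different dtypes; whether the spec should mandate this is exactly what is being asked above. A minimal sketch using NumPy:

```python
import numpy as np

a = np.array([1, 2], dtype=np.int32)
b = np.array([3.0, 4.0], dtype=np.float64)

# NumPy promotes the result dtype across the inputs
# (int32 + float64 -> float64). Other libraries may differ,
# which is why the spec needs to pin this down.
c = np.concatenate((a, b))
print(c.dtype)  # float64
```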
- `expand_dims`: NumPy supports providing a tuple or an `int`. All other array libraries considered support only an `int`. Torch names this method `unsqueeze`. Went with `expand_dims` and only accepting an `int` for the second positional argument.
- `flip`: TensorFlow lacks this exact API. Torch/CuPy spec `axis`/`dims` as a positional argument. Based proposal on NumPy, where `axis` is a keyword argument, as more versatile.
- `reshape`: Torch requires a tuple (does not allow an `int`). TensorFlow requires the shape to be an `int32`/`int64` tensor. NumPy allows providing an `int` as shorthand. Based proposal on Torch's more restricted API for consistency.
- `roll`: TensorFlow requires tensors for `axis` and `shifts`.
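To make the `reshape` divergence concrete: the tuple form is the portable subset, while the `int` shorthand is NumPy-specific convenience. A sketch in NumPy (the proposal itself would only permit the tuple form):

```python
import numpy as np

x = np.arange(6)

# Tuple shape: accepted by NumPy, Torch, and (as a tensor) TensorFlow;
# this is the form the proposal standardizes on.
y = np.reshape(x, (2, 3))

# Int shorthand: NumPy-only convenience, excluded from the proposal.
z = np.reshape(x, 6)

print(y.shape, z.shape)  # (2, 3) (6,)
```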
- `squeeze`: Torch only allows specifying one axis and does not error if you attempt to squeeze non-singleton dimensions. NumPy/TensorFlow error if you attempt to squeeze a dimension which is not `1`. Sided with Torch regarding error behavior, as it is not clear why attempting to squeeze a non-singleton dimension should error.
- `stack`: CuPy requires a tuple rather than a sequence for the first argument. Went with tuple for the same reasons as for `concat`. Same `dtype` question(s) apply as for `concat` above.
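The `squeeze` divergence noted above can be demonstrated directly in NumPy, which raises on non-singleton axes; the proposal instead follows Torch's no-error behavior. A sketch:

```python
import numpy as np

x = np.ones((1, 2, 1))

# Squeezing with no axis removes all singleton dimensions.
print(np.squeeze(x).shape)  # (2,)

# NumPy (like TensorFlow) errors on a non-singleton axis;
# Torch, and the proposal, would treat this as a no-op instead.
try:
    np.squeeze(x, axis=1)
except ValueError:
    print("ValueError")  # ValueError
```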