In NumPy, `einsum` accepts `*operands` (variadic positional arguments); in PyTorch, it accepts a single `operands` argument (a list of tensors).

From https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.einsum.html:

```python
numpy.einsum(subscripts, *operands, ...)
```

From https://pytorch.org/docs/master/torch.html?highlight=einsum#torch.einsum:

```python
torch.einsum(equation, operands)
```

From https://www.tensorflow.org/api_docs/python/tf/einsum:

```python
tf.einsum(equation, *inputs, **kwargs)
```

Are there good reasons for keeping it different from NumPy / TensorFlow?
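To make the call-site difference concrete, here is a minimal sketch (the operand arrays `a` and `b` are illustrative; the PyTorch call is shown as a comment, following the list-style signature from the docs above):

```python
import numpy as np

a = np.random.rand(2, 3)
b = np.random.rand(3, 4)

# NumPy: operands are passed as separate positional arguments (*operands)
out_np = np.einsum('ij,jk->ik', a, b)

# PyTorch (per the signature above): operands are wrapped in a list
# import torch
# out_pt = torch.einsum('ij,jk->ik', [torch.from_numpy(a), torch.from_numpy(b)])
```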