2013, Journal of the ACM
We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list includes: determining the feasibility of a system of bilinear equations; deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm; approximating an eigenvalue, eigenvector, singular vector, or the spectral norm; and determining the rank or best rank-1 approximation of a 3-tensor. Furthermore, we show that restricting these problems to symmetric tensors does not alleviate their NP-hardness. We also explain how deciding nonnegative definiteness of a symmetric 4-tensor is NP-hard and how computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.
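For reference, the spectral objects named in this abstract are usually defined as follows (a standard formulation in the spirit of Lim and Qi; the notation below is illustrative, not taken from the paper):

```latex
% For a 3-tensor A = (a_{ijk}) and real unit vectors x, y, z:
\[
\|A\|_{\sigma} = \max_{\|x\|=\|y\|=\|z\|=1}\Big|\sum_{i,j,k} a_{ijk}\, x_i y_j z_k\Big|
\qquad \text{(spectral norm)},
\]
\[
\sum_{j,k} a_{ijk}\, y_j z_k = \sigma x_i,\quad
\sum_{i,k} a_{ijk}\, x_i z_k = \sigma y_j,\quad
\sum_{i,j} a_{ijk}\, x_i y_j = \sigma z_k
\qquad \text{(singular value and singular vectors)},
\]
\[
\sum_{j,k} a_{ijk}\, x_j x_k = \lambda x_i
\qquad \text{(eigenpair of a symmetric 3-tensor)}.
\]
```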
Journal of …, 2009
This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple "bottlenecks" and on "swamps". Existing theoretical results are surveyed, some numerical algorithms are described in detail, and their numerical complexity is calculated. In particular, the interest of using the ELS enhancement in these algorithms is discussed. Computer simulations support this discussion.
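Not from the paper: a minimal numpy sketch of CP-ALS for a third-order tensor, with a crude grid search over extrapolation steps standing in for the exact ELS (enhanced line search) step discussed above; the factor names, rank R, and defaults are illustrative.

```python
import numpy as np

def cp_als(T, R, iters=200, ls_steps=(1.5, 2.0, 3.0), seed=0):
    """Rank-R CP approximation of a 3-way tensor T by ALS.

    After each sweep, a crude grid search over extrapolation step sizes
    stands in for the exact ELS enhancement."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))

    def rebuild(A, B, C):
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    err = np.linalg.norm(T - rebuild(A, B, C))
    for _ in range(iters):
        A_old, B_old, C_old = A.copy(), B.copy(), C.copy()
        # one ALS sweep: each factor solves a linear least-squares problem
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        # simplified line search along the direction of the sweep update
        dA, dB, dC = A - A_old, B - B_old, C - C_old
        best = (np.linalg.norm(T - rebuild(A, B, C)), A, B, C)
        for mu in ls_steps:
            cand = (A_old + mu * dA, B_old + mu * dB, C_old + mu * dC)
            e = np.linalg.norm(T - rebuild(*cand))
            if e < best[0]:
                best = (e, *cand)
        err, A, B, C = best
    return A, B, C, err

# usage: decompose a random rank-3 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
_, _, _, err = cp_als(T, R=3)
print(err)  # close to zero when ALS converges to the true decomposition
```

The actual ELS enhancement chooses the step by minimizing a polynomial in the step size rather than by grid search, which is what keeps its per-iteration cost low.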
We propose a constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products. The algorithm, named TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1) series via the singular value decomposition (SVD). TTr1SVD naturally generalizes the SVD to the tensor regime with properties such as uniqueness for a fixed order of indices, orthogonal rank-1 outer product terms, and easy truncation error quantification. Using an outer product column table, it also allows, for the first time, a complete characterization of all tensors orthogonal to the original tensor. Incidentally, this leads to a strikingly simple constructive proof showing that the maximum rank of a real $2 \times 2 \times 2$ tensor over the real field is 3. We also derive a conversion of the TTr1 decomposition into a Tucker decomposition with a sparse core tensor. Numerical examples illustrate each of the favorable properties of the TTr1 decomposition.
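Not the authors' code: a small numpy sketch of the TTr1 idea for a third-order tensor, i.e. an SVD of the first unfolding followed by SVDs of the reshaped right singular vectors; function and variable names are illustrative.

```python
import numpy as np

def ttr1_terms(T, tol=1e-12):
    """Decompose a 3-way tensor T (I x J x K) into a finite sum of rank-1
    outer products sigma * a (x) b (x) c, TTr1-style: SVD the I x (JK)
    unfolding, then SVD each reshaped right singular vector."""
    I, J, K = T.shape
    U, S, Vt = np.linalg.svd(T.reshape(I, J * K), full_matrices=False)
    terms = []
    for r, s1 in enumerate(S):
        if s1 < tol:
            continue
        # each right singular vector is reshaped to J x K and split again
        U2, S2, Vt2 = np.linalg.svd(Vt[r].reshape(J, K), full_matrices=False)
        for q, s2 in enumerate(S2):
            if s2 < tol:
                continue
            terms.append((s1 * s2, U[:, r], U2[:, q], Vt2[q]))
    return terms

# usage: the terms reconstruct T exactly (up to round-off)
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))
terms = ttr1_terms(T)
T_rec = sum(s * np.einsum('i,j,k->ijk', a, b, c) for s, a, b, c in terms)
print(len(terms), np.allclose(T, T_rec))  # number of rank-1 terms; True
```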
Siam Journal on Matrix Analysis and Applications, 2008
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher, that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e. matrices).
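The standard illustration of this failure is the well-known rank-3, border-rank-2 example (stated here for context; W and A_n are generic names): for linearly independent vectors x and y,

```latex
\[
W = x\otimes x\otimes y + x\otimes y\otimes x + y\otimes x\otimes x
\qquad\text{(rank 3)},
\]
\[
A_n = n\Big(x + \tfrac{1}{n}\,y\Big)^{\otimes 3} - n\,x^{\otimes 3}
\qquad\text{(rank 2)},\qquad
\|A_n - W\| = O(1/n) \to 0 .
\]
```

Since the infimum of ||W - A|| over tensors A of rank at most 2 is therefore 0 but is never attained, W has no best rank-2 approximation.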
arXiv (Cornell University), 2011
In this paper we suggest a new algorithm for the computation of a best rank-one approximation of tensors, called alternating singular value decomposition. This method is based on the computation of maximal singular values and the corresponding singular vectors of matrices. We also introduce a modification of this method and of the alternating least squares method that ensures the alternating iterations always converge to a semi-maximal point. (A critical point in several vector variables is semi-maximal if it is maximal with respect to each vector variable while the other vector variables are kept fixed.) We present several numerical examples that illustrate the computational performance of the new method in comparison to the alternating least squares method.
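Not the paper's reference implementation: a compact numpy sketch of the alternating-SVD idea for the best rank-1 approximation of a 3-tensor, where one vector is held fixed at a time and the dominant singular pair of the contracted matrix updates the other two; names and defaults are illustrative.

```python
import numpy as np

def rank1_asvd(T, iters=100, seed=0):
    """Rank-1 approximation sigma * x (x) y (x) z of a 3-way tensor T by
    alternating SVDs: contract T with one unit vector, then take the
    dominant singular pair of the resulting matrix to update the other two."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(T.shape[2])
    z /= np.linalg.norm(z)
    for _ in range(iters):
        # fix z: dominant singular pair of the I x J matrix obtained from T and z
        U, S, Vt = np.linalg.svd(np.einsum('ijk,k->ij', T, z))
        x, y = U[:, 0], Vt[0]
        # fix x: dominant singular pair of the J x K matrix obtained from T and x
        U, S, Vt = np.linalg.svd(np.einsum('ijk,i->jk', T, x))
        y, z = U[:, 0], Vt[0]
        # fix y: dominant singular pair of the I x K matrix obtained from T and y
        U, S, Vt = np.linalg.svd(np.einsum('ijk,j->ik', T, y))
        x, z = U[:, 0], Vt[0]
    sigma = np.einsum('ijk,i,j,k->', T, x, y, z)
    return sigma, x, y, z

# usage
T = np.random.default_rng(2).standard_normal((4, 4, 4))
sigma, x, y, z = rank1_asvd(T)
print(abs(sigma))  # a (local) maximum of |T(x, y, z)| over unit vectors
```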
2014
In many applications, such as data compression, imaging, or genomic data analysis, it is important to approximate a given tensor by a tensor that is sparsely representable. For matrices, i.e. 2-tensors, such a representation can be obtained via the singular value decomposition, which allows one to compute best rank-k approximations. For very big matrices a low-rank approximation using the SVD is not computationally feasible; in this case different approximations are available, and variants of the CUR decomposition seem most suitable. For d-mode tensors T with d > 2, many generalizations of the singular value decomposition have been proposed to obtain low tensor rank decompositions. The most appropriate approximation seems to be the best (r_1,...,r_d)-approximation, which maximizes the l_2 norm of the projection of T on a tensor product of subspaces U_1,...,U_d, where U_i is an r_i-dimensional subspace. One of the most common methods is the alternating maximization method (AMM). It is obtaine...
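Not from the paper: a short numpy sketch of the alternating maximization method described above for the best (r_1, r_2, r_3)-approximation of a 3-way tensor (often referred to as higher-order orthogonal iteration, HOOI), alternately re-choosing each subspace U_i as the span of leading left singular vectors of a projected unfolding; names and defaults are illustrative.

```python
import numpy as np

def hooi(T, ranks, iters=50, seed=0):
    """Best (r1, r2, r3)-approximation of a 3-way tensor T by alternating
    maximization of the l2 norm of its projection onto U1 (x) U2 (x) U3.
    Each Ui is updated to the span of leading left singular vectors of the
    unfolding of T projected onto the other two subspaces."""
    rng = np.random.default_rng(seed)
    r1, r2, r3 = ranks
    # orthonormal starting bases
    U1 = np.linalg.qr(rng.standard_normal((T.shape[0], r1)))[0]
    U2 = np.linalg.qr(rng.standard_normal((T.shape[1], r2)))[0]
    U3 = np.linalg.qr(rng.standard_normal((T.shape[2], r3)))[0]
    for _ in range(iters):
        W = np.einsum('ijk,jb,kc->ibc', T, U2, U3).reshape(T.shape[0], -1)
        U1 = np.linalg.svd(W, full_matrices=False)[0][:, :r1]
        W = np.einsum('ijk,ia,kc->jac', T, U1, U3).reshape(T.shape[1], -1)
        U2 = np.linalg.svd(W, full_matrices=False)[0][:, :r2]
        W = np.einsum('ijk,ia,jb->kab', T, U1, U2).reshape(T.shape[2], -1)
        U3 = np.linalg.svd(W, full_matrices=False)[0][:, :r3]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U1, U2, U3)
    approx = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
    return approx, (U1, U2, U3), core

# usage
T = np.random.default_rng(3).standard_normal((6, 7, 8))
approx, _, core = hooi(T, (2, 2, 2))
print(np.linalg.norm(core))  # the l2 norm of the projection being maximized
```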
2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2017
Analysis of multidimensional arrays, usually called tensors, often becomes difficult when the tensor rank (the minimum number of rank-one components) exceeds all the tensor dimensions. Traditional methods for the canonical polyadic decomposition of such tensors, namely alternating least squares, can be used, but the presence of a large number of false local minima can make the problem hard. Multiple random initializations are usually advised in such cases, but the question is how many random initializations are sufficient to give a good chance of finding the right solution. It appears that this number can be very large. We propose a novel approach to the problem. The given tensor is augmented with unknown parameters to a shape that admits ordinary tensor diagonalization, i.e., transforming the augmented tensor into an exact or nearly diagonal form by multiplying the tensor by non-orthogonal invertible matrices. Three possible constraints are ...
HAL (Le Centre pour la Communication Scientifique Directe), 2017
The canonical polyadic decomposition is one of the most widely used tensor decompositions. However, classical decomposition algorithms such as alternating least squares suffer from convergence problems, so the decomposition of large tensors can be very time consuming. Recently it has been shown that the decomposition can be rewritten as a joint eigenvalue decomposition problem. In this paper we propose a fast joint eigenvalue decomposition algorithm and then show how it benefits the canonical polyadic decomposition of large tensors. Abstract (translated from Italian): The canonical decomposition of tensors is used in several fields, among them data science. However, classical decomposition algorithms such as alternating least squares can run into convergence problems, and for this reason the decomposition of large tensors can be very costly in computation time. Recently, fast canonical decomposition algorithms have been developed, based on the diagonalization of a set of matrices over a common basis of eigenvectors. In this article we propose an original algorithm for solving the latter problem. We then highlight the most interesting aspects of this approach for carrying out the canonical decomposition of large tensors.
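Not the algorithm proposed in the paper: a toy numpy illustration of why the CPD can be rewritten as an eigenvalue problem, using the classical two-slice construction for a rank-R tensor with square, invertible factor matrices; all names and the random test data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
R, K = 3, 5
# ground-truth CP factors (square, invertible A and B for simplicity)
A = rng.standard_normal((R, R))
B = rng.standard_normal((R, R))
C = rng.uniform(0.5, 2.0, (K, R))          # keep entries away from zero
T = np.einsum('ir,jr,kr->ijk', A, B, C)    # rank-R tensor

# two frontal slices satisfy T[:, :, k] = A diag(C[k]) B^T, so
# T0 @ inv(T1) = A diag(C[0] / C[1]) inv(A): an ordinary eigenvalue problem
M = T[:, :, 0] @ np.linalg.inv(T[:, :, 1])
eigvals, eigvecs = np.linalg.eig(M)

# the eigenvectors recover the columns of A up to scaling and permutation
cos = np.abs(A.T @ eigvecs) / (
    np.linalg.norm(A, axis=0)[:, None] * np.linalg.norm(eigvecs, axis=0))
print(np.sort(cos.max(axis=1)))  # all close to 1: each column of A is matched
```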
arXiv (Cornell University), 2019
In this era of big data, data analytics, and machine learning, it is imperative to find ways to compress large data sets such that intrinsic features necessary for subsequent analysis are not lost. The traditional workhorse for data dimensionality reduction and feature extraction has been the matrix SVD, which presupposes that the data has been arranged in matrix format. Our main goal in this study is to show that high-dimensional data sets are more compressible when treated as tensors (aka multiway arrays) and compressed via tensor-SVDs under the tensor-tensor product structures in [13, 11]. We begin by proving Eckart-Young optimality results for families of tensor-SVDs under two different truncation strategies. As such optimality properties can be proven in both matrix and tensor-based algebras, a fundamental question arises: does the tensor construct subsume the matrix construct in terms of representation efficiency? The answer is yes, as shown when we prove that a tensor-tensor representation of an equal-dimensional spanning space can be superior to its matrix counterpart. We then investigate how the compressed representation provided by the truncated tensor-SVD is related, both theoretically and in compression performance, to its closest tensor-based analogue, the truncated HOSVD [2, 3], thereby showing the potential advantages of our tensor-based algorithms. Finally, we propose new tensor truncated SVD variants, namely multi-way tensor SVDs, which provide further gains in representation efficiency, and discuss under which conditions they are considered optimal. We conclude with a numerical study demonstrating the utility of the theory.

Significance. Much of real-world data is inherently multidimensional, often involving high-dimensional correlations. However, many data analysis pipelines process data as two-dimensional arrays (i.e., matrices), even if the data is naturally high-dimensional. The common practice of matricizing high-dimensional data is due to the ubiquitousness and strong theoretical foundations of matrix algebra. Over the last century, dating back to 1927 [10] with the introduction of the canonical polyadic (CP) decomposition, various tensor-based approximation techniques have been developed. These high-dimensional techniques have been shown to be instrumental in a broad range of application areas, yet, hitherto, none has been theoretically proven to outperform matricization in general settings. This lack of matrix-mimetic properties and theoretical guarantees has impeded the adoption of tensor-based techniques as viable mainstream data analysis alternatives. In this study, we propose preserving data in a native, tensor-based format while processing it using new matrix-mimetic, tensor-algebraic formulations. Considering a general family of tensor algebras, we prove an Eckart-Young optimality theorem for truncated tensor representations. Perhaps more significantly, we prove these tensor-based reductions are superior to traditional matrix-based representations. Such results distinguish the proposed approach from other tensor-based approaches. We believe this work will lead to revolutionary new ways in which data with high-dimensional correlations are treated.
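Not code from the paper: a compact numpy sketch of a t-SVD under the classical circulant-based t-product (one common instance of a tensor-tensor product), computed slice-wise in the Fourier domain; the truncation parameter and all names are illustrative.

```python
import numpy as np

def t_product(X, Y):
    """t-product of an n1 x p x n3 tensor with a p x n2 x n3 tensor
    (circular convolution along the third mode), done in the Fourier domain."""
    Xf, Yf = np.fft.fft(X, axis=2), np.fft.fft(Y, axis=2)
    return np.fft.ifft(np.einsum('ipk,pjk->ijk', Xf, Yf), axis=2).real

def t_svd(T, r=None):
    """Truncated t-SVD T ~ U * S * V^T (t-products) with tubal rank r,
    via matrix SVDs of the frontal slices in the Fourier domain."""
    n1, n2, n3 = T.shape
    r = min(n1, n2) if r is None else r
    Tf = np.fft.fft(T, axis=2)
    Uf = np.zeros((n1, r, n3), dtype=complex)
    Sf = np.zeros((r, r, n3), dtype=complex)
    Vf = np.zeros((n2, r, n3), dtype=complex)
    for k in range(n3 // 2 + 1):
        slab = Tf[:, :, k].real if (k == 0 or 2 * k == n3) else Tf[:, :, k]
        u, s, vh = np.linalg.svd(slab, full_matrices=False)
        Uf[:, :, k], Sf[:, :, k], Vf[:, :, k] = u[:, :r], np.diag(s[:r]), vh[:r].conj().T
        if 0 < k and 2 * k != n3:  # mirror the conjugate frequency so the iFFT is real
            Uf[:, :, n3 - k] = Uf[:, :, k].conj()
            Sf[:, :, n3 - k] = Sf[:, :, k].conj()
            Vf[:, :, n3 - k] = Vf[:, :, k].conj()
    to_real = lambda Zf: np.fft.ifft(Zf, axis=2).real
    return to_real(Uf), to_real(Sf), to_real(Vf)

# usage: the full t-SVD reconstructs T; truncating r gives a tubal-rank-r approximation
T = np.random.default_rng(5).standard_normal((5, 4, 6))
U, S, V = t_svd(T)
n3 = T.shape[2]
VT = np.transpose(V, (1, 0, 2))[:, :, np.r_[0, n3 - 1:0:-1]]  # t-transpose of V
print(np.allclose(t_product(t_product(U, S), VT), T))          # True
```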
Numerical Linear Algebra with Applications, 2017
Summary: We generalize the matrix Kronecker product to tensors and propose the tensor Kronecker product singular value decomposition, which decomposes a real k-way tensor into a linear combination of tensor Kronecker products with an arbitrary number d of factors. We show how to construct such a decomposition, in which each factor is also a k-way tensor, thus including matrices (k = 2) as a special case. The problem is readily solved by reshaping and permuting the original tensor into a d-way tensor, followed by an orthogonal polyadic decomposition. Moreover, we introduce the new notion of general symmetric tensors (encompassing symmetric, persymmetric, centrosymmetric, Toeplitz and Hankel tensors, etc.) and prove that when the original tensor is structured, its factors inherit this structure.
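Not the authors' construction: a small numpy sketch of the classical matrix (k = 2, d = 2) special case, the Van Loan-Pitsianis nearest Kronecker product, which the reshape-and-permute idea above generalizes; block sizes and names are illustrative.

```python
import numpy as np

def nearest_kron(A, m1, n1, m2, n2):
    """Best approximation A ~ B (x) C in the Frobenius norm, with B m1 x n1
    and C m2 x n2, obtained by rearranging the blocks of A into a matrix
    whose best rank-1 approximation gives vec(B) and vec(C)."""
    assert A.shape == (m1 * m2, n1 * n2)
    R = np.empty((m1 * n1, m2 * n2))
    for i in range(m1):
        for j in range(n1):
            # row (i, j) of R is the (i, j) block of A, flattened
            R[i * n1 + j] = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2].ravel()
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    B = (np.sqrt(S[0]) * U[:, 0]).reshape(m1, n1)
    C = (np.sqrt(S[0]) * Vt[0]).reshape(m2, n2)
    return B, C

# usage: an exact Kronecker product is recovered exactly (up to a sign shared
# by B and C, which cancels in B (x) C)
rng = np.random.default_rng(6)
B0, C0 = rng.standard_normal((3, 2)), rng.standard_normal((4, 5))
A = np.kron(B0, C0)
B, C = nearest_kron(A, 3, 2, 4, 5)
print(np.allclose(np.kron(B, C), A))  # True
```

The abstract's construction applies the same reshape-and-permute idea to k-way tensors and d factors, replacing the rank-1 SVD step with an orthogonal polyadic decomposition.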
SIAM Journal on Optimization, 2013
The canonical polyadic and rank-(L_r, L_r, 1) block term decomposition (CPD and BTD, respectively) are two closely related tensor decompositions. The CPD and, recently, the BTD are important tools in psychometrics, chemometrics, neuroscience, and signal processing. We present a decomposition that generalizes these two and develop algorithms for its computation. Among these algorithms are alternating least squares schemes, several general unconstrained optimization techniques, and matrix-free nonlinear least squares methods. In the latter we exploit the structure of the Jacobian's Gramian to reduce computational and memory cost. Numerical experiments confirm that, combined with an effective preconditioner, these methods are among the most efficient and robust currently available for computing the CPD, the rank-(L_r, L_r, 1) BTD, and their generalized decomposition.
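One structural fact of the kind exploited here, stated for the plain CPD with factor matrices A, B, C and column-major vectorization (the paper's generalized decomposition is handled analogously but is not reproduced here):

```latex
% CPD residual, mode-1 unfolding: T_{(1)} - A\,(C \odot B)^{\mathsf T}
% (\odot = Khatri-Rao product, \ast = Hadamard product), so the Jacobian with
% respect to vec(A) is J_A = -(C \odot B) \otimes I_I and its Gramian block is
\[
J_A^{\mathsf T} J_A
  = \big[(C \odot B)^{\mathsf T}(C \odot B)\big] \otimes I_I
  = \big[(C^{\mathsf T} C) \ast (B^{\mathsf T} B)\big] \otimes I_I ,
\]
% an IR x IR block assembled from two R x R Gram matrices: the kind of structure
% that keeps matrix-free Gauss-Newton steps and preconditioners cheap.
```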