2011
Operations with tensors, or multiway arrays, have become increasingly prevalent in recent years. Traditionally, tensors are represented or decomposed as a sum of rank-1 outer products using either the CANDECOMP/PARAFAC (CP) or the Tucker models, or some variation thereof. Such decompositions are motivated by specific applications where the goal is to find an approximate such representation for a given multiway array. The specifics of the approximate representation (such as how many terms to use in the sum, orthogonality constraints, etc.) depend on the application.
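Below is a minimal numpy sketch (not taken from the paper) of what the CP model means concretely: a third-order tensor represented as a sum of R rank-1 outer products. The factor matrices and sizes are illustrative.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a third-order tensor from CP factors A (I x R), B (J x R), C (K x R)."""
    T = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
    for r in range(A.shape[1]):
        # add the r-th rank-1 outer product a_r o b_r o c_r
        T += np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])
    return T

# toy usage: an exactly rank-2 tensor of size 4 x 5 x 6
A, B, C = np.random.randn(4, 2), np.random.randn(5, 2), np.random.randn(6, 2)
print(cp_reconstruct(A, B, C).shape)  # (4, 5, 6)
```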
2010
Tensor decompositions permit the deterministic estimation of the parameters in a multilinear model. Applications have already been pointed out in antenna array processing and digital communications, among others, and are extremely attractive provided some diversity is available at the receiver. As opposed to the widely used ALS algorithm, non-iterative algorithms are proposed in this paper to compute the required tensor decomposition into a sum of rank-1 terms when some factor matrices enjoy particular structure, such as block-Hankel, triangular, or banded.
SIAM Journal on Matrix Analysis and Applications, 2013
Recent work by Kilmer and Martin, [10] and Braman [2] provides a setting in which the familiar tools of linear algebra can be extended to better understand third-order tensors. Continuing along this vein, this paper investigates further implications including: 1) a bilinear operator on the matrices which is nearly an inner product and which leads to definitions for length of matrices, angle between two matrices and orthogonality of matrices and 2) the use of t-linear combinations to characterize the range and kernel of a mapping defined by a third-order tensor and the t-product and the quantification of the dimensions of those sets. These theoretical results lead to the study of orthogonal projections as well as an effective Gram-Schmidt process for producing an orthogonal basis of matrices. The theoretical framework also leads us to consider the notion of tensor polynomials and their relation to tensor eigentuples defined in [2]. Implications for extending basic algorithms such as the power method, QR iteration, and Krylov subspace methods are discussed. We conclude with two examples in image processing: using the orthogonal elements generated via a Golub-Kahan iterative bidiagonalization scheme for facial recognition and solving a regularized image deblurring problem.
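To make the t-product framework concrete, here is a small hedged sketch of how the t-product of two third-order tensors can be computed: transform along the third mode with the FFT, multiply the frontal slices, and transform back. The function name and sizes are illustrative; this is not code from [10] or [2].

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x m x n3), giving an n1 x m x n3 tensor."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.zeros((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]   # facewise matrix product
    return np.real(np.fft.ifft(Cf, axis=2))

A = np.random.randn(3, 4, 5)
B = np.random.randn(4, 2, 5)
print(t_product(A, B).shape)  # (3, 2, 5)
```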
SIAM Journal on Matrix Analysis and Applications, 2008
This paper presents some recent filtering methods based on the lower-rank tensor approximation approach for denoising tensor signals. In this approach, multicomponent data are represented by tensors, that is, multiway arrays, and the presented tensor filtering methods rely on multilinear algebra. First, the classical channel-by-channel SVD-based filtering method is reviewed. Then, an extension of the classical matrix filtering method is presented. It is based on the lower rank-$(K_1, \ldots, K_N)$ truncation of the higher order SVD, which performs a multimode principal component analysis (PCA) and is implicitly designed for an additive white Gaussian noise. Two tensor filtering methods recently developed by the authors are also reviewed. The first method improves the multimode PCA-based tensor filtering in the case of an additive correlated Gaussian noise; this improvement relies on the fourth-order cumulant slice matrix. The second method extends Wiener filtering to data tensors. The performances and comparative results of all these tensor filtering methods are presented for noise reduction in color images, multispectral images, and multicomponent seismic data.
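As an illustration of the multimode PCA step described above, the following hedged sketch applies a rank-(K1, K2, K3) truncation of the HOSVD to a noisy third-order array. The rank choices are assumed to be user-supplied, and this is not the authors' exact filtering code.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Project T onto the dominant K_n-dimensional subspace of each mode."""
    factors = []
    for n, K in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        factors.append(U[:, :K])
    core = T
    for n, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, n)), 0, n)
    out = core                      # reconstruct the filtered tensor
    for n, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, out, axes=(1, n)), 0, n)
    return out

noisy = np.random.randn(20, 20, 3)
print(truncated_hosvd(noisy, (5, 5, 2)).shape)  # (20, 20, 3)
```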
IEEE Signal Processing Magazine, 2015
The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
In this paper we propose novel methods for compression and recovery of multilinear data under limited sampling. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD), which is a group-theoretic framework for tensor decomposition. In contrast to popular existing tensor decomposition techniques such as higher-order SVD (HOSVD), t-SVD has optimality properties similar to the truncated SVD for matrices. Based on t-SVD, we first construct novel tensor-rank like measures to characterize informational and structural complexity of multilinear data. Following that we outline a complexity penalized algorithm for tensor completion from missing entries. As an application, 3-D and 4-D (color) video data compression and recovery are considered. We show that videos with linear camera motion can be represented more efficiently using t-SVD compared to traditional approaches based on vectorizing or flattening of the tensors. Application of the proposed tensor completion algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. In conclusion we point out several research directions and implications to online prediction of multilinear data.
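A minimal sketch of the t-SVD used above is given below: SVDs of the frontal slices in the Fourier domain along the third mode. It illustrates only the decomposition itself, not the paper's completion algorithm; names and shapes are illustrative.

```python
import numpy as np

def t_svd(X):
    """Return U, S, V with X = U * S * V^T in the t-product sense (S is f-diagonal)."""
    n1, n2, n3 = X.shape
    m = min(n1, n2)
    Xf = np.fft.fft(X, axis=2)
    Uf = np.zeros((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vf = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3):
        u, s, vh = np.linalg.svd(Xf[:, :, k])
        Uf[:, :, k] = u
        Sf[np.arange(m), np.arange(m), k] = s   # singular tubes on the diagonal
        Vf[:, :, k] = vh.conj().T
    to_real = lambda A: np.real(np.fft.ifft(A, axis=2))
    return to_real(Uf), to_real(Sf), to_real(Vf)

X = np.random.randn(5, 4, 6)
U, S, V = t_svd(X)
print(U.shape, S.shape, V.shape)  # (5, 5, 6) (5, 4, 6) (4, 4, 6)
```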
IEEE Journal of Selected Topics in Signal Processing
Tensor decompositions, including the CANDECOMP/PARAFAC decomposition (CPD), the Tucker decomposition (TKD), and the tensor train decomposition (TTD), are extensions of the singular value decomposition (SVD) for matrices. They are frameworks to decompose image or video data into bases and coefficients. Owing to recent developments in artificial intelligence (AI), tensor decomposition techniques are becoming increasingly important due to their compact representation, fast access, and easy reconstruction. However, tensor decompositions remain challenging in both computation and interpretation because CPD lacks orthogonality, TKD lacks sparsity, and TTD lacks both orthogonality and sparsity. To understand these issues, we evaluate the theoretical and practical limitations induced by the lack of orthogonality and sparsity in existing tensor decomposition methods. To overcome these limitations, a tensor decomposition method with both orthogonality and sparsity is proposed. Due to the two properti...
arXiv (Cornell University), 2019
In this era of big data, data analytics, and machine learning, it is imperative to find ways to compress large data sets such that intrinsic features necessary for subsequent analysis are not lost. The traditional workhorse for data dimensionality reduction and feature extraction has been the matrix SVD, which presupposes that the data have been arranged in matrix format. Our main goal in this study is to show that high-dimensional data sets are more compressible when treated as tensors (aka multiway arrays) and compressed via tensor SVDs under the tensor-tensor product structures in [13, 11]. We begin by proving Eckart-Young optimality results for families of tensor SVDs under two different truncation strategies. As such optimality properties can be proven in both matrix and tensor-based algebras, a fundamental question arises: does the tensor construct subsume the matrix construct in terms of representation efficiency? The answer is yes, as shown when we prove that a tensor-tensor representation of an equal-dimensional spanning space can be superior to its matrix counterpart. We then investigate how the compressed representation provided by the truncated tensor SVD is related, both theoretically and in compression performance, to its closest tensor-based analogue, the truncated HOSVD [2, 3], thereby showing the potential advantages of our tensor-based algorithms. Finally, we propose new truncated tensor SVD variants, namely multi-way tensor SVDs, which provide further approximate representation efficiency, and discuss under which conditions they can be considered optimal. We conclude with a numerical study demonstrating the utility of the theory.

Significance. Much real-world data is inherently multidimensional, often involving high-dimensional correlations. However, many data analysis pipelines process data as two-dimensional arrays (i.e., matrices) even if the data is naturally represented in higher dimensions. The common practice of matricizing high-dimensional data is due to the ubiquity and strong theoretical foundations of matrix algebra. Over the last century, dating back to 1927 [10] with the introduction of the canonical (CP) decomposition, various tensor-based approximation techniques have been developed. These high-dimensional techniques have been demonstrated to be instrumental in a broad range of application areas, yet, hitherto, none has been theoretically proven to outperform matricization in general settings. This lack of matrix-mimetic properties and theoretical guarantees has impeded the adoption of tensor-based techniques as viable mainstream data analysis alternatives. In this study, we propose preserving data in a native, tensor-based format while processing it using new matrix-mimetic, tensor-algebraic formulations. Considering a general family of tensor algebras, we prove an Eckart-Young optimality theorem for truncated tensor representations. Perhaps more significantly, we prove these tensor-based reductions are superior to traditional matrix-based representations. Such results distinguish the proposed approach from other tensor-based approaches. We believe this work will lead to revolutionary new ways in which data with high-dimensional correlations are treated.
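The following hedged sketch illustrates the kind of truncation the Eckart-Young results above concern: keeping the k dominant components of each facewise SVD in the Fourier domain, i.e., a truncated t-SVD-style approximation. The truncation level k is an assumed parameter, and the multi-way variants proposed in the paper are not reproduced here.

```python
import numpy as np

def t_svd_truncate(X, k):
    """Rank-k t-SVD-style approximation of a 3-way array X (truncation per frontal slice)."""
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for j in range(X.shape[2]):
        u, s, vh = np.linalg.svd(Xf[:, :, j], full_matrices=False)
        Yf[:, :, j] = (u[:, :k] * s[:k]) @ vh[:k, :]   # keep k dominant components
    return np.real(np.fft.ifft(Yf, axis=2))

X = np.random.randn(30, 30, 8)
X_k = t_svd_truncate(X, 5)
print(np.linalg.norm(X - X_k) / np.linalg.norm(X))  # relative approximation error
```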
2014
In many applications, such as data compression, imaging, or genomic data analysis, it is important to approximate a given tensor by a tensor that is sparsely representable. For matrices, i.e. 2-tensors, such a representation can be obtained via the singular value decomposition, which allows one to compute best rank-k approximations. For very big matrices a low-rank approximation using the SVD is not computationally feasible; in this case different approximations are available, and variants of the CUR decomposition seem most suitable. For d-mode tensors T with d > 2, many generalizations of the singular value decomposition have been proposed to obtain low tensor rank decompositions. The most appropriate approximation seems to be the best (r_1, ..., r_d)-approximation, which maximizes the l_2 norm of the projection of T onto a tensor product of subspaces U_1, ..., U_d, where U_i is an r_i-dimensional subspace. One of the most common methods is the alternating maximization method (AMM). It is obtaine...
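A minimal numpy sketch of the alternating maximization (AMM, also known as HOOI) idea mentioned above is given below for a third-order tensor: each mode's subspace is updated in turn from the projection of T onto the other modes. The fixed iteration count and HOSVD initialization are illustrative simplifications.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, iters=20):
    """Alternating updates for a best rank-(r1, ..., rd) approximation of T."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(iters):
        for n in range(T.ndim):
            Y = T
            for m in range(T.ndim):
                if m != n:   # project onto all other modes' current subspaces
                    Y = np.moveaxis(np.tensordot(U[m].T, Y, axes=(1, m)), 0, m)
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
    return U

T = np.random.randn(10, 12, 14)
U1, U2, U3 = hooi(T, (3, 3, 3))
print(U1.shape, U2.shape, U3.shape)  # (10, 3) (12, 3) (14, 3)
```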
Journal of …, 2009
This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple "bottlenecks" and on "swamps". Existing theoretical results are surveyed, some numerical algorithms are described in detail, and their numerical complexity is calculated. In particular, the interest of using the ELS enhancement in these algorithms is discussed. Computer simulations feed this discussion.
2019
Modeling a multidimensional signal as a tensor is more convincing than representing it as a collection of matrices. Tensor-based approaches can explore the abundant spatial and temporal structure of the multidimensional signal. The backbone of this modeling is the mathematical foundation of tensor algebra. Linear transform based tensor algebra furnishes low-complexity and high-performance algebraic structures suitable for the introspection of multidimensional signals. A comprehensive introduction to linear transform based tensor algebra is provided from the signal processing viewpoint. The rank of a multidimensional signal is a valuable property which gives insight into its structural aspects. All natural multidimensional signals can be approximated by a low-rank signal without losing significant information. The low-rank approximation is beneficial in many signal processing applications such as denoising, missing sample estimation, resolution enhancement, c...
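A small hedged sketch of the linear-transform-based product underlying this kind of algebra: the FFT of the classical t-product is replaced by another invertible transform along the third mode. The orthonormal DCT used here is purely an illustrative choice, not necessarily the transform used in the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def transform_product(A, B):
    """L-product of A (n1 x n2 x n3) and B (n2 x m x n3) with L taken as the orthonormal DCT."""
    Ah = dct(A, axis=2, norm='ortho')
    Bh = dct(B, axis=2, norm='ortho')
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # facewise products in the transform domain
    return idct(Ch, axis=2, norm='ortho')

A = np.random.randn(3, 4, 6)
B = np.random.randn(4, 2, 6)
print(transform_product(A, B).shape)  # (3, 2, 6)
```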
Machine Learning, 2019
In recent studies, tensor ring decomposition (TRD) has become a promising model for tensor completion. However, TRD suffers from the rank selection problem due to the undetermined multilinear rank. For tensor decomposition with missing entries, the sub-optimal rank selection of traditional methods leads to overfitting or underfitting. In this paper, we first explore the latent space of the TRD and theoretically prove the relationship between the TR-rank and the ranks of the tensor unfoldings. Then, we propose two tensor completion models by imposing different low-rank regularizations on the TR factors, by which the TR-rank of the underlying tensor is minimized and its low-rank structure is exploited. By employing the alternating direction method of multipliers scheme, our algorithms obtain the TR factors and the underlying tensor simultaneously. In experiments on tensor completion tasks, our algorithms show robustness to rank selection and high computational efficiency in comparison to traditional low-rank approximation algorithms.
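For concreteness, the following minimal sketch shows how a tensor is rebuilt from tensor ring cores, each entry being the trace of a product of per-mode core slices. The core shapes and TR-ranks are illustrative, and the paper's ADMM-based completion algorithms are not reproduced.

```python
import numpy as np

def tr_reconstruct(cores):
    """cores[k] has shape (r_k, n_k, r_{k+1}); the last rank wraps back to the first."""
    shape = tuple(G.shape[1] for G in cores)
    T = np.zeros(shape)
    for idx in np.ndindex(*shape):
        M = np.eye(cores[0].shape[0])
        for k, i in enumerate(idx):
            M = M @ cores[k][:, i, :]   # chain the per-mode slices
        T[idx] = np.trace(M)            # close the ring with a trace
    return T

# toy TR cores with ring ranks (2, 3, 2) for a 4 x 5 x 6 tensor
cores = [np.random.randn(2, 4, 3), np.random.randn(3, 5, 2), np.random.randn(2, 6, 2)]
print(tr_reconstruct(cores).shape)  # (4, 5, 6)
```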
Are there analogues to the SVD, LU, QR, and other matrix decompositions for tensors (i.e., higher-order or multiway arrays)? What exactly do we mean by "analogue," anyway? If such decompositions exist, are there efficient ways to compute them?
2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2017
Analysis of multidimensional arrays, usually called tensors, often becomes difficult when the tensor rank (the minimum number of rank-one components) exceeds all the tensor dimensions. Traditional methods for the canonical polyadic decomposition of such tensors, namely alternating least squares, can be used, but the presence of a large number of false local minima can make the problem hard. Multiple random initializations are usually advised in such cases, but the question is how many random initializations are sufficient to give a good chance of finding the right solution. It appears that this number can be very large. We propose a novel approach to the problem. The given tensor is augmented by some unknown parameters to a shape that admits ordinary tensor diagonalization, i.e., transforming the augmented tensor into an exact or nearly diagonal form by multiplying the tensor by non-orthogonal invertible matrices. Three possible constraints are ...
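For reference, here is a hedged sketch of the traditional CP alternating least squares (ALS) baseline the paper contrasts with, including the multiple random initializations discussed above. The rank, iteration count, and number of restarts are illustrative parameters; the tensor-diagonalization method proposed in the paper is not shown.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R)."""
    return np.column_stack([np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])])

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, R, iters=100, restarts=5, seed=0):
    """CP-ALS with several random initializations; return the best factors found."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(restarts):
        A, B, C = (rng.standard_normal((n, R)) for n in T.shape)
        for _ in range(iters):
            A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
            B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
            C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
        err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C))
        if err < best_err:
            best, best_err = (A, B, C), err
    return best, best_err
```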
Foundations and Trends® in Machine Learning, 2016
Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and the complexity of cross-modal couplings (the so-called curse of dimensionality), which is prohibitive for the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large-scale optimization problems. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas. In Part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy-to-interpret graphical representations of the mathematical operations on tensor networks. Such conceptual insight allows for seamless migration of ideas from flat-view matrices to tensor network operations and vice versa, and provides a platform for further developments, practical applications, and non-Euclidean extensions. It also permits the introduction of various tensor network operations without an explicit notion of mathematical expressions, which may be beneficial for many research communities that do not directly rely on multilinear algebra. Our focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.
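As a concrete companion to the tensor train (TT) format emphasized above, here is a minimal sketch of the sequential-SVD (TT-SVD) construction: repeatedly reshape and take a truncated SVD. The uniform maximum rank is an illustrative simplification of the usual error-controlled truncation.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Decompose T into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (S[:r, None] * Vt[:r, :]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

T = np.random.randn(4, 5, 6, 7)
print([G.shape for G in tt_svd(T, max_rank=3)])
# [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]
```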
We propose a constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products. The algorithm, named TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1) series via the singular value decomposition (SVD). TTr1SVD naturally generalizes the SVD to the tensor regime with properties such as uniqueness for a fixed order of indices, orthogonal rank-1 outer product terms, and easy truncation error quantification. Using an outer product column table it also allows, for the first time, a complete characterization of all tensors orthogonal with the original tensor. Incidentally, this leads to a strikingly simple constructive proof showing that the maximum rank of a real $2 \times 2 \times 2$ tensor over the real field is 3. We also derive a conversion of the TTr1 decomposition into a Tucker decomposition with a sparse core tensor. Numerical examples illustrate each of the favorable properties of the TTr1 decomposition.
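The following is a small hedged sketch of the TTr1 idea for a third-order tensor: SVD the mode-1 unfolding, then SVD each reshaped right singular vector, yielding an exact finite sum of rank-1 outer products. It is a simplified illustration, not the authors' TTr1SVD implementation.

```python
import numpy as np

def ttr1_terms(T):
    """Return a list of (sigma, a, b, c) with T = sum sigma * a o b o c."""
    n1, n2, n3 = T.shape
    U, S, Vt = np.linalg.svd(T.reshape(n1, n2 * n3), full_matrices=False)
    terms = []
    for i in range(len(S)):
        Vi = Vt[i, :].reshape(n2, n3)        # reshape the i-th right singular vector
        Ui, Si, Vit = np.linalg.svd(Vi, full_matrices=False)
        for j in range(len(Si)):
            terms.append((S[i] * Si[j], U[:, i], Ui[:, j], Vit[j, :]))
    return terms

T = np.random.randn(3, 4, 5)
R = sum(s * np.einsum('i,j,k->ijk', a, b, c) for s, a, b, c in ttr1_terms(T))
print(np.allclose(R, T))  # True: exact reconstruction from the rank-1 series
```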
ArXiv, 2021
Low rank tensor approximation is a fundamental tool in modern machine learning and data science. In this paper, we study the characterization, perturbation analysis, and an efficient sampling strategy for two primary tensor CUR approximations, namely Chidori and Fiber CUR. We characterize exact tensor CUR decompositions for low multilinear rank tensors. We also present theoretical error bounds of the tensor CUR approximations when (adversarial or Gaussian) noise appears. Moreover, we show that low cost uniform sampling is sufficient for tensor CUR approximations if the tensor has an incoherent structure. Empirical performance evaluations, with both synthetic and real-world datasets, establish the speed advantage of the tensor CUR approximations over other state-of-the-art low multilinear rank tensor approximations.
2000
Traditionally, extending the Singular Value Decomposition (SVD) to third-order tensors (multiway arrays) has involved a representation using the outer product of vectors. These outer products can be written in terms of the n-mode product, which can also be used to describe a type of multiplication between two tensors. In this paper, we present a different type of third-order generalization of the SVD.
IEEE Transactions on Signal Processing, 2000
In general, algorithms for the order-3 CANDECOMP/PARAFAC (CP) decomposition, also coined the canonical polyadic decomposition (CPD), are easy to implement and can be extended to higher-order CPD. Unfortunately, the algorithms become computationally demanding and are often not applicable to higher-order and relatively large-scale tensors. In this paper, by exploiting the uniqueness of the CPD and the relation between a tensor in Kruskal form and its unfolded tensor, we propose a fast approach to deal with this problem.
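A small sketch of the relation exploited above: an order-4 tensor in Kruskal (CP) form, when reshaped to order 3, is again a Kruskal tensor whose middle factor is the Khatri-Rao product of the grouped factors. The mode grouping and sizes here are illustrative.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R)."""
    return np.column_stack([np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])])

def cp_full(factors):
    """Dense tensor from CP factors: sum of rank-1 terms via chained Khatri-Rao products."""
    K = factors[0]
    for F in factors[1:]:
        K = khatri_rao(K, F)
    return K.sum(axis=1).reshape(tuple(F.shape[0] for F in factors))

A, B, C, D = (np.random.randn(n, 3) for n in (2, 3, 4, 5))
T4 = cp_full([A, B, C, D])                     # order-4 Kruskal tensor
T3 = cp_full([A, khatri_rao(B, C), D])         # order-3 Kruskal tensor after grouping modes 2,3
print(np.allclose(T4.reshape(2, 12, 5), T3))   # True: same data, fewer modes
```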