Total Least Squares by Martin Plešinger

Linear Algebra and its Applications, 2019
The total least squares (TLS) framework represents a popular data fitting approach for solving matrix approximation problems of the form $\mathcal{A}(X)\equiv AX\approx B$. A general linear mapping on spaces of matrices $\mathcal{A}:X\longmapsto B$ can be represented by a fourth-order tensor, which is highly structured in the $AX\approx B$ case. This has a direct impact on the solvability of the corresponding TLS problem, which is known to be complicated. This paper therefore focuses on several generalizations of the model $\mathcal{A}$: the bilinear model, the model of higher Kronecker rank, and the fully tensorized model. It is shown how the corresponding generalization of the TLS formulation enriches the search space for the data corrections. Solvability of the resulting minimization problem is studied. Furthermore, an extension of the so-called core reduction to the bilinear model is presented. For the fully tensorized model, its relation to a particular single right-hand side TLS problem is derived. Relationships among the individual formulations are discussed.
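
To make the Kronecker structure concrete, here is a minimal numerical sketch (assuming the standard column-wise $\mathrm{vec}$ operator and the identity $\mathrm{vec}(AXB)=(B^T\otimes A)\,\mathrm{vec}(X)$; the variable names are illustrative): the map $X\longmapsto AX$ acts on $\mathrm{vec}(X)$ through $I\otimes A$, i.e., the representing tensor has Kronecker rank one.

```python
import numpy as np

# The map X -> AX, written on vectorized matrices, is I (Kronecker) A:
# vec(AX) = (I \otimes A) vec(X), a Kronecker-rank-one representation.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 2))

lhs = (A @ X).flatten(order="F")       # vec(AX), column-major
M = np.kron(np.eye(2), A)              # matrix representing X -> AX
rhs = M @ X.flatten(order="F")         # (I \otimes A) vec(X)

assert np.allclose(lhs, rhs)
```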

Applications of Mathematics, 2019
Linear matrix approximation problems $AX\approx B$ are often solved by total least squares minimization (TLS). Unfortunately, a TLS solution may not exist in general. The so-called core problem theory brought insight into this effect. Moreover, it simplified the solvability analysis if $B$ is of column rank one, by extracting a core problem that always has a unique TLS solution. However, if the rank of $B$ is larger, the core problem may remain unsolvable in the TLS sense, as shown for the first time by Hnětynková, Plešinger, and Sima (2016). A full classification of core problems with respect to their solvability has been missing; here we fill this gap. We then concentrate on the so-called composed (or reducible) core problems, which can be represented by a composition of several smaller core problems. We analyze how the solvability classes of the components influence the solvability class of the composed problem. We also show on an example that the TLS solvability class of a core problem may, in a certain sense, be improved by its composition with a suitably chosen component. The existence of irreducible problems in various solvability classes is discussed.

The total least squares (TLS) represents a popular data fitting approach for solving linear approximation problems $Ax\approx b$ (i.e., with a vector right-hand side) and $AX\approx B$ (i.e., with a matrix right-hand side) contaminated by errors. This paper introduces a generalization of the TLS formulation to problems with structured right-hand sides. First, we focus on the case where the right-hand side, and consequently also the solution, is a tensor. We show that whereas the basic solvability result can be obtained directly by matricization of both tensors, generalizing the core problem reduction is more complicated. The core reduction allows one to reduce the problem dimensions mathematically by removing all redundant and irrelevant data from the system matrix and the right-hand side. We prove that the core problems within the original tensor problem and its matricized counterpart are in general different. Then we concentrate on problems with even more structured right-hand sides, where the same model $A$ corresponds to a set of various tensor right-hand sides. Finally, relations between the matrix and tensor core problems are discussed.
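
As a small illustration of the matricization step (a sketch; the unfolding convention below is the usual mode-1 unfolding and is only one possible choice, not necessarily the one used in the paper):

```python
import numpy as np

# Mode-1 unfolding of a third-order right-hand side tensor B:
# the slices B[:, :, k] are placed side by side.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 3, 2))      # tensor right-hand side
B_mat = B.reshape(5, -1, order="F")     # 5 x 6 matricization

# The tensor problem then matricizes to a standard TLS problem
# A X_mat ~ B_mat with multiple (here six) right-hand side columns.
```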

Recently it was shown how the necessary and sufficient information for solving an orthogonally invariant linear approximation problem $AX\approx B$ with multiple right-hand sides can be revealed through the so-called core problem reduction; see [I. Hnětynková et al., SIAM J. Matrix Anal. Appl., 34, 2013, pp. 917--931]. The total least squares (TLS) serves as an important example of such an approximation problem. Solvability of TLS was discussed in full generality in [I. Hnětynková et al., SIAM J. Matrix Anal. Appl., 32, 2011, pp. 748--770]. This theoretical study investigates solvability of core problems with multiple right-hand sides in the TLS sense. It is shown that, contrary to the single right-hand side case, a core problem with multiple right-hand sides may not have a TLS solution. The possible internal structure of core problems is studied further. Outputs of the classical TLS algorithm for the original problem $AX\approx B$ and for the core problem within $AX\approx B$ are compared.

The concept of the core problem in total least squares (TLS) problems with a single right-hand side, introduced in [C. C. Paige and Z. Strakoš, SIAM J. Matrix Anal. Appl., 27 (2005), pp. 861–875], separates the necessary and sufficient information for solving the problem from the redundancies and irrelevant information contained in the data. It is based on orthogonal transformations such that the resulting problem decomposes into two independent parts. One of the parts has a nonzero right-hand side and minimal dimensions, and it always has a unique TLS solution. The other part has a trivial (zero) right-hand side and maximal dimensions. Assuming exact arithmetic, the core problem can be obtained by the Golub–Kahan bidiagonalization. The extension of the core concept to the multiple right-hand sides case $AX\approx B$ in [I. Hnětynková, M. Plešinger, and Z. Strakoš, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 917–931], which is highly nontrivial, is based on the application of the singular value decomposition. In this paper we prove that the band generalization of the Golub–Kahan bidiagonalization, proposed in this context by Björck, also yields the core problem. We introduce generalized Jacobi matrices and investigate their properties. They prove useful in the further analysis of the core problem concept. This paper assumes exact arithmetic. (© 2015, Society for Industrial and Applied Mathematics)

This paper focuses on total least squares (TLS) problems $AX\approx B$ with multiple right-hand sides. Existence and uniqueness of a TLS solution for such problems were analyzed in [I. Hnětynková et al., SIAM J. Matrix Anal. Appl., 32, 2011, pp. 748–770]. For TLS problems with a single right-hand side, the paper [C. C. Paige and Z. Strakoš, SIAM J. Matrix Anal. Appl., 27, 2006, pp. 861–875] showed how the necessary and sufficient information for solving $Ax\approx b$ can be revealed from the original data through the so-called core problem concept. In this paper we present a theoretical study extending this concept to problems with multiple right-hand sides. The data reduction we present here is based on the singular value decomposition of the system matrix $A$. We show minimality of the reduced problem; in this sense the situation is analogous to the single right-hand side case. Some other properties of the core problem, however, cannot be extended to the case of multiple right-hand sides. (© 2013, Society for Industrial and Applied Mathematics)
This paper revisits the analysis of the total least squares (TLS) problem $AX\approx B$ with multiple right-hand sides given by Van Huffel and Vandewalle in the monograph The Total Least Squares Problem: Computational Aspects and Analysis, SIAM, Philadelphia, 1991. The newly proposed classification is based on properties of the singular value decomposition of the extended matrix $[B|A]$. It aims at identifying the cases when a TLS solution does or does not exist, and when the output computed by the classical TLS algorithm, given by Van Huffel and Vandewalle, is actually a TLS solution. The presented results on existence and uniqueness of the TLS solution reveal subtleties that were not captured in the known literature. (© 2011, Society for Industrial and Applied Mathematics)
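
For reference, a minimal sketch of the classical TLS algorithm's generic-case output (assuming the extended matrix is ordered as $[A|B]$ and the trailing block $V_{22}$ of right singular vectors is nonsingular; the paper's classification identifies exactly when this output is, or is not, a TLS solution):

```python
import numpy as np

def classical_tls(A, B):
    """Generic-case classical TLS output for AX ~ B (a sketch)."""
    n, d = A.shape[1], B.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, B]))
    V = Vt.T
    V12 = V[:n, n:]                     # n x d block of right sing. vectors
    V22 = V[n:, n:]                     # d x d trailing block
    # Requires V22 nonsingular; otherwise no generic-case solution exists.
    return -V12 @ np.linalg.inv(V22)
```
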
Consider an orthogonally invariant linear approximation problem $Ax\approx b$. In [8] it is proved that the partial upper bidiagonalization of the extended matrix $[b, A]$ determines a core approximation problem $A_{11}x_1\approx b_1$, with all necessary and sufficient information for solving the original ...
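
A toy sketch of that reduction (exact-arithmetic reasoning transcribed naively to floating point, without the reorthogonalization a robust implementation would need): Golub–Kahan bidiagonalization started from $b$ produces coefficients $\alpha_j$, $\beta_j$ that form the lower bidiagonal core matrix $A_{11}$, with $b_1=\beta_1 e_1$.

```python
import numpy as np

def golub_kahan_core(A, b, k):
    """At most k steps of Golub-Kahan bidiagonalization of (b, A) (a sketch).

    Returns beta_1 (so b_1 = beta_1 * e_1) and the lower bidiagonal
    core-candidate matrix with alphas on the diagonal and betas on the
    subdiagonal.  No reorthogonalization, so keep k small.
    """
    n = A.shape[1]
    alphas, betas = [], [np.linalg.norm(b)]
    u, v = b / betas[0], np.zeros(n)
    for _ in range(k):
        r = A.T @ u - betas[-1] * v
        alpha = np.linalg.norm(r)
        if alpha == 0.0:                # exact-arithmetic termination
            break
        alphas.append(alpha)
        v = r / alpha
        p = A @ v - alpha * u
        beta = np.linalg.norm(p)
        if beta == 0.0:                 # exact-arithmetic termination
            break
        betas.append(beta)
        u = p / beta
    A11 = np.diag(alphas) + np.diag(betas[1:len(alphas)], -1)
    return betas[0], A11
```
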
Consider a linear approximation problem $AX\approx B$ with multiple right-hand sides. When errors in the data are confined to both $B$ and $A$, the total least squares (TLS) concept is used to solve this problem. Contrary to the standard least squares approximation problem, a solution of the TLS problem may not exist. For a single (vector) right-hand side, the classical theory was developed by G. H. Golub and C. F. Van Loan [2], and S. Van Huffel and J. Vandewalle [4], and then complemented recently by the core problem approach of C. C. Paige and Z. Strakoš [5,6,7]. Analysis of the problem with multiple right-hand sides is still under development. In this short contribution we present conditions for the existence of a TLS solution. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
Regularization by Martin Plešinger

The Golub–Kahan iterative bidiagonalization represents the core algorithm in several regularization methods for solving large linear noise-polluted ill-posed problems. We consider a general noise setting and derive explicit relations between the (noise-contaminated) bidiagonalization vectors and the residuals of the bidiagonalization-based regularization methods LSQR, LSMR, and CRAIG. For the LSQR and LSMR residuals, we prove that the coefficients of the linear combination of the computed bidiagonalization vectors reflect the amount of propagated noise in each of these vectors. For CRAIG, the residual is only a multiple of a particular bidiagonalization vector. We show how its size indicates the regularization effect in each iteration by expressing the CRAIG solution as the exact solution to a modified compatible problem. Validity of the results for larger two-dimensional problems and the influence of the loss of orthogonality are also discussed.
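
The semiconvergence behavior that makes such residual information valuable can be reproduced on a toy problem (a sketch using SciPy's LSQR; the test matrix and noise level below are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Toy ill-posed problem: severely ill-conditioned A, noisy right-hand side.
rng = np.random.default_rng(2)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** -np.linspace(0, 8, n)) @ V.T
x_true = V @ np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# Semiconvergence: the error first decreases, then noise takes over.
for k in (2, 5, 10, 20, 40):
    x_k = lsqr(A, b, iter_lim=k)[0]
    print(k, np.linalg.norm(x_k - x_true))
```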

The total least squares (TLS) and truncated TLS (T-TLS) methods are widely known linear data fitting approaches, often used also in the context of very ill-conditioned, rank-deficient, or ill-posed problems. Regularization properties of T-TLS applied to linear approximation problems $Ax\approx b$ were analyzed by Fierro, Golub, Hansen, and O'Leary (1997) through the so-called filter factors, which allow the solution to be represented in terms of a filtered pseudoinverse of $A$ applied to $b$. This paper focuses on the situation when multiple observations $b_1,\ldots,b_d$ are available, i.e., the T-TLS method is applied to the problem $AX\approx B$, where $B=[b_1,\ldots,b_d]$ is a matrix. It is proved that the filtering representation of the T-TLS solution can be generalized to this case. The corresponding filter factors are explicitly derived. (© 2017, Institute of Mathematics of the Academy of Sciences of the Czech Republic, Praha)
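
A compact sketch of the T-TLS solution that the filtering representation describes (generic case, truncation level $k$; the formula $X_k=-V_{12}V_{22}^{\dagger}$ is the standard one from the single right-hand side literature, applied blockwise to $AX\approx B$ with the extended matrix ordered as $[A|B]$):

```python
import numpy as np

def truncated_tls(A, B, k):
    """Truncated TLS output for AX ~ B with truncation level k (a sketch).

    SVD of the extended matrix [A, B]; the first k right singular
    vectors are kept as the "signal" block, the rest define the
    correction: X_k = -V12 pinv(V22).
    """
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, B]))
    V = Vt.T
    V12 = V[:n, k:]                     # n x (n+d-k) block
    V22 = V[n:, k:]                     # d x (n+d-k) block
    return -V12 @ np.linalg.pinv(V22)
```
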
Image deblurring represents one of the important areas of image processing. When information about the amount of noise in the given blurred image is available, it can significantly improve the performance of image deblurring algorithms. The paper [11] introduced an iterative method for estimating the noise level in linear algebraic ill-posed problems contaminated by white noise. Here we study the applicability of this approach to image deblurring problems with various types of blurring operators. White as well as data-correlated noise of various sizes is considered.

Regularization techniques based on the Golub–Kahan iterative bidiagonalization are among the popular approaches for solving large ill-posed problems. First, the original problem is projected onto a lower-dimensional subspace using the bidiagonalization algorithm, which by itself represents a form of regularization by projection. The projected problem, however, inherits a part of the ill-posedness of the original problem, and therefore some form of inner regularization must be applied. Stopping criteria for the whole process are then based on the regularization of the projected (small) problem. In this paper we consider an ill-posed problem with a noisy right-hand side (observation vector), where the noise level is unknown. We show how the information from the Golub–Kahan iterative bidiagonalization can be used for estimating the noise level. Such information can be useful for constructing efficient stopping criteria in solving ill-posed problems.
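
The phenomenon behind the estimate can be illustrated synthetically (a sketch only: because the noise vector is known by construction here, its share in each left bidiagonalization vector can be measured directly, whereas the method of this paper extracts the noise level from the bidiagonalization data alone):

```python
import numpy as np

# Track how strongly each left bidiagonalization vector u_k correlates
# with the (here known) white-noise component of the right-hand side.
rng = np.random.default_rng(3)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** -np.linspace(0, 8, n)) @ V.T  # ill-posed toy matrix
noise = 1e-4 * rng.standard_normal(n)
b = A @ (V @ np.ones(n)) + noise

beta, v = np.linalg.norm(b), np.zeros(n)
u = b / beta
for k in range(1, 11):
    r = A.T @ u - beta * v
    alpha = np.linalg.norm(r)
    v = r / alpha
    p = A @ v - alpha * u
    beta = np.linalg.norm(p)
    u = p / beta
    print(k, abs(u @ noise) / np.linalg.norm(noise))  # noise content of u_{k+1}
```
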
Other Papers by Martin Plešinger
We study Hermite orthogonal polynomials and the Gram matrices of their non-standard inner products. The weight function of the non-standard inner product is obtained from the Gauss probability density function by a horizontal shift by a real parameter. We are interested in the spectral properties of these matrices and some of their modifications. We show how the largest and smallest eigenvalues of the matrices depend on the parameter.
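
A sketch of how such a Gram matrix can be set up numerically (assuming physicists' Hermite polynomials and the shifted weight $e^{-(x-t)^2}$; the exact conventions in the paper may differ): substituting $y = x - t$ turns the shifted-weight integral into a standard Gauss–Hermite quadrature.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

def shifted_hermite_gram(n, t, quad_deg=80):
    """Gram matrix G_ij = int H_i(x) H_j(x) exp(-(x-t)^2) dx (a sketch).

    Substituting y = x - t reduces the shifted-Gaussian weight to the
    standard Gauss-Hermite one, so the integral is evaluated exactly
    (up to rounding) by quadrature of sufficient degree.
    """
    y, w = hermgauss(quad_deg)          # nodes/weights for weight e^{-y^2}
    # Values H_i(y + t) for i = 0..n-1, one row per polynomial.
    H = np.array([hermval(y + t, np.eye(n)[i]) for i in range(n)])
    return (H * w) @ H.T

G = shifted_hermite_gram(5, t=0.5)
# Sanity check: for t = 0, G is diagonal with entries sqrt(pi) * 2^i * i!.
```
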
The paper by I. Hnětynková et al. (2015) [11] introduces real wedge-shaped matrices, which can be seen as a generalization of Jacobi matrices, and investigates their basic properties. They are used in the analysis of the behavior of a Krylov subspace method: the band (or block) generalization of the Golub–Kahan bidiagonalization. Wedge-shaped matrices can also be linked to the band (or block) Lanczos method. In this paper, we introduce a complex generalization of wedge-shaped matrices and show some further spectral properties, complementing the already known ones. We focus in particular on nonzero components of eigenvectors.

Numerical Linear Algebra with Applications, 2013
This paper is concerned with the numerical solution of symmetric large-scale Lyapunov equations with low-rank right-hand sides and coefficient matrices depending on a parameter. Specifically, we consider the situation when the parameter dependence is sufficiently smooth, and the aim is to compute solutions for many different parameter samples. On the basis of existing results for Lyapunov equations and parameter-dependent linear systems, we prove that the tensor containing all solution samples typically allows for an excellent low multilinear rank approximation. Stacking all sampled equations into one huge linear system, this fact can be exploited by combining the preconditioned CG method with low-rank truncation. Our approach is flexible enough to allow for a variety of preconditioners based, for example, on the sign function iteration or the alternating direction implicit method. (© 2013, John Wiley & Sons, Ltd.)
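
A toy experiment consistent with the low multilinear rank claim (a sketch; the parameterization $A(t)=A_0+tA_1$ and the sampling below are arbitrary illustrative choices): solve the sampled Lyapunov equations directly, stack the solutions into a tensor, and observe the fast singular value decay of its unfolding.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Parameter-dependent Lyapunov equation A(t) X + X A(t)^T = -b b^T with
# A(t) = A0 + t*A1 symmetric negative definite (illustrative choice).
rng = np.random.default_rng(4)
n, samples = 30, 20
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A0 = -Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T  # symmetric, stable
A1 = -np.eye(n)
b = rng.standard_normal((n, 1))

T = np.empty((n, n, samples))
for k, t in enumerate(np.linspace(0.0, 1.0, samples)):
    T[:, :, k] = solve_continuous_lyapunov(A0 + t * A1, -b @ b.T)

# Rapid singular value decay of the mode-3 unfolding indicates that the
# solution tensor admits a good low multilinear rank approximation.
s = np.linalg.svd(T.reshape(n * n, samples, order="F").T, compute_uv=False)
print(s / s[0])
```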