Forum of Mathematics, Sigma, 2014
We study sparse approximation by greedy algorithms. We prove the Lebesgue-type inequalities for the weak Chebyshev greedy algorithm (WCGA), a generalization of the weak orthogonal matching pursuit to the case of a Banach space. The main novelty of these results is a Banach space setting instead of a Hilbert space setting. The results are proved for redundant dictionaries satisfying certain conditions. Then we apply these general results to the case of bases. In particular, we prove that the WCGA provides almost optimal sparse approximation for the trigonometric system in $L_p$, $2 \le p < \infty$.
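For reference, a sketch of the WCGA iteration in its standard form (following Temlyakov's usual definition; the weakness parameter $t$ and the norming functional $F_{f_{m-1}}$ are standard notation, not taken from this abstract):

```latex
% WCGA with weakness parameter t in (0,1]; f_0 := f. At step m = 1, 2, ...
% F_{f_{m-1}} is a norming functional of f_{m-1}, i.e.
%   \|F_{f_{m-1}}\| = 1 and F_{f_{m-1}}(f_{m-1}) = \|f_{m-1}\|.
\varphi_m \in \mathcal{D}: \quad
  F_{f_{m-1}}(\varphi_m) \ \ge\ t \, \sup_{g \in \mathcal{D}} F_{f_{m-1}}(g),
\qquad
G_m := \operatorname*{arg\,min}_{G \in \operatorname{span}\{\varphi_1,\dots,\varphi_m\}} \|f - G\|,
\qquad
f_m := f - G_m .
```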
IEEE Transactions on Information Theory
We study sparse approximation by greedy algorithms. Our contribution is twofold. First, we prove exact recovery with high probability of random K-sparse signals within ⌈K(1 + ε)⌉ iterations of the Orthogonal Matching Pursuit (OMP). This result shows that in a probabilistic sense the OMP is almost optimal for exact recovery. Second, we prove the Lebesgue-type inequalities for the Weak Chebyshev Greedy Algorithm, a generalization of the Weak Orthogonal Matching Pursuit to the case of a Banach space. The main novelty of these results is a Banach space setting instead of a Hilbert space setting. However, even in the case of a Hilbert space our results add some new elements to known results on the Lebesgue-type inequalities for the RIP dictionaries. Our technique is a development of the recent technique created by Zhang.
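A minimal sketch of the OMP iteration discussed above (a generic textbook version in NumPy, not the authors' code; the fixed iteration count and variable names are illustrative):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-atom support for y ~ Phi @ x.

    Phi : (m, n) dictionary/measurement matrix with unit-norm columns.
    y   : (m,) observed signal.
    k   : number of iterations (target sparsity).
    """
    residual = y.copy()
    support = []
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonal step: refit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x, support
```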
2015
This is a survey of recent results in constructive sparse approximation. Three directions are discussed: (1) Lebesgue-type inequalities for greedy algorithms with respect to a special class of dictionaries, (2) constructive sparse approximation with respect to the trigonometric system, and (3) sparse approximation with respect to dictionaries with tensor product structure. In all three cases constructive ways are provided for sparse approximation. The technique used is based on fundamental results from the theory of greedy approximation. In particular, results in direction (1) are based on deep methods developed recently in compressed sensing. We present some of these results with detailed proofs.
2009
Solving an under-determined system of equations for the sparsest solution has attracted considerable attention in recent years. Of the two well-known approaches, greedy algorithms such as matching pursuit (MP) are simpler to implement and can produce satisfactory results under certain conditions. In this paper, we compare several greedy algorithms in terms of the sparsity of the solution vector and the approximation accuracy. We present two new greedy algorithms based on the recently proposed complementary matching pursuit (CMP) and the sensing dictionary framework, and compare them with the classical MP, CMP, and the sensing dictionary approach. It is shown that in the noise-free case, the complementary matching pursuit algorithm performs the best among these algorithms.
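For contrast with the orthogonal variants compared above, the classical MP update only adjusts the coefficient of the newly selected atom (a generic sketch of plain MP, not the paper's CMP or sensing-dictionary variants):

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Plain MP: unlike OMP, previously chosen coefficients are never refit."""
    residual = y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        x[j] += corr[j]                # assumes unit-norm columns of D
        residual -= corr[j] * D[:, j]  # remove the atom's contribution
    return x
```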
Signal Processing, 2006
A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they are chosen from a large, linearly dependent collection. The first part of this paper proposes a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit (S-OMP), for simultaneous sparse approximation. Then it presents some numerical experiments that demonstrate how a sparse model for the input signals can be identified more reliably given several input signals. Afterward, the paper proves that the S-OMP algorithm can compute provably good solutions to several simultaneous sparse approximation problems. The second part of the paper develops another algorithmic approach called convex relaxation, and it provides theoretical results on the performance of convex relaxation for simultaneous sparse approximation.
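A sketch of the S-OMP greedy step for a matrix of signals, using the sum of absolute correlations across signals as the selection statistic (one common choice; the stopping rule and norm used to aggregate correlations are illustrative assumptions):

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP: one shared support for all columns of Y.

    Phi : (m, n) dictionary with unit-norm columns.
    Y   : (m, s) matrix whose columns are the input signals.
    """
    R = Y.copy()
    support = []
    for _ in range(k):
        # Select the atom with the largest total correlation over all signals.
        scores = np.sum(np.abs(Phi.T @ R), axis=1)
        support.append(int(np.argmax(scores)))
        # Joint least-squares refit against every signal at once.
        C, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        R = Y - Phi[:, support] @ C
    return support, C
```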
2018
We show the potential of greedy recovery strategies for the sparse approximation of multivariate functions from a small dataset of pointwise evaluations by considering an extension of the orthogonal matching pursuit to the setting of weighted sparsity. The proposed recovery strategy is based on a formal derivation of the greedy index selection rule. Numerical experiments show that the proposed weighted orthogonal matching pursuit algorithm is able to reach accuracy levels similar to those of weighted ℓ1 minimization programs while considerably improving the computational efficiency.
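The abstract does not reproduce the derived selection rule; one form commonly used in the weighted-sparsity literature normalizes the correlation by the index weight. A hedged sketch of that single step (the normalization shown is an assumption, not necessarily the rule derived in the paper):

```python
import numpy as np

def weighted_omp_select(Psi, residual, w):
    """One weighted greedy selection step.

    Psi      : (m, n) sampling matrix (rows = pointwise evaluations of basis functions).
    residual : (m,) current residual of the data fit.
    w        : (n,) weights >= 1 penalizing 'expensive' indices.
    """
    # Assumed rule: larger weight => harder to select (mirrors weighted-l1 penalties).
    scores = np.abs(Psi.T @ residual) / w
    return int(np.argmax(scores))
```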
2017
Sparse approximation of signals using dictionaries that are often redundant, learned, and data-dependent has been used successfully in many applications in signal and image processing over the last couple of decades. Finding the optimal sparse approximation is in general an NP-complete problem, and many suboptimal solutions have been proposed: greedy methods like Matching Pursuit (MP) and relaxation methods like the Lasso. Algorithms developed for special dictionary structures can often greatly improve the speed, and sometimes the quality, of sparse approximation.
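The two approaches named above attack the same problem in different ways: greedy methods tackle the combinatorial formulation directly, while the Lasso replaces the ℓ0 count with a convex ℓ1 penalty (standard formulations; the notation is ours):

```latex
\min_{x} \|x\|_0 \ \ \text{s.t.}\ \ \|y - Dx\|_2 \le \varepsilon
\qquad\longrightarrow\qquad
\min_{x}\ \tfrac12 \|y - Dx\|_2^2 + \lambda \|x\|_1 \quad \text{(Lasso)} .
```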
2005
A simple sparse approximation problem requests an approximation of a given input signal as a linear combination of T elementary signals drawn from a large, linearly dependent collection. An important generalization is simultaneous sparse approximation. Now one must approximate several input signals at once using different linear combinations of the same T elementary signals. This formulation appears, for example, when analyzing multiple observations of a sparse signal that have been contaminated with noise.
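In matrix form, the simultaneous problem described here asks for a coefficient matrix supported on at most T shared rows (a standard formalization of the text above; the notation is ours):

```latex
\min_{X \in \mathbb{R}^{n \times s}} \ \|Y - \Phi X\|_F
\quad \text{subject to} \quad
\#\{\, i : X_{i,:} \neq 0 \,\} \ \le\ T ,
```

where the s columns of Y are the input signals and each column of X collects the coefficients for one signal.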
Constructive Approximation, 2016
This paper is concerned with the performance of Orthogonal Matching Pursuit (OMP) algorithms applied to a dictionary D in a Hilbert space H. Given an element f ∈ H, OMP generates a sequence of approximations f_n, n = 1, 2, …, each of which is a linear combination of n dictionary elements chosen by a greedy criterion. It is studied whether the approximations f_n are in some sense comparable to best n-term approximation from the dictionary. One important result related to this question is a theorem of Zhang [8] in the context of sparse recovery of finite-dimensional signals. This theorem shows that OMP exactly recovers any n-sparse signal whenever the dictionary D satisfies a Restricted Isometry Property (RIP) of order An for some constant A, and that the procedure is also stable in ℓ2 under measurement noise. The main contribution of the present paper is to give a structurally simpler proof of Zhang's theorem, formulated in the general context of n-term approximation from a dictionary in arbitrary Hilbert spaces H. Namely, it is shown that OMP generates near-best n-term approximations under a similar RIP condition.
2011 National Conference on Communications (NCC), 2011
Compressed Sensing (CS) provides a set of mathematical results showing that sparse signals can be exactly reconstructed from a relatively small number of random linear measurements. A particularly appealing greedy-approach to signal reconstruction from CS measurements is the so called Orthogonal Matching Pursuit (OMP). We propose two modifications to the basic OMP algorithm, which can be handy in different situations.
IEEE Transactions on Signal Processing, 2013
IEEE Transactions on Signal Processing, 2006
This report extends to the case of sparse approximations our previous study on the effects of introducing a priori knowledge to solve the recovery of sparse representations when overcomplete dictionaries are used. Greedy algorithms and Basis Pursuit Denoising are considered in this work. Theoretical results show how the use of "reliable" a priori information (which in this work appears under the form of weights) can improve the performance of these methods. In particular, we generalize the sufficient conditions established by Tropp [2], [3] and Gribonval and Vandergheynst [4], which guarantee the retrieval of the sparsest solution, to the case where a priori information is used. We prove how the use of prior models at the signal decomposition stage influences these sufficient conditions. The results found in this work reduce to the classical case of [4] and [3] when no a priori information about the signal is available.
Information Theory, IEEE …, 2010
Orthogonal matching pursuit (OMP) is the canonical greedy algorithm for sparse approximation. In this paper we demonstrate that the restricted isometry property (RIP) can be used for a very straightforward analysis of OMP. Our main conclusion is that the RIP of order K+1 (with isometry constant δ_{K+1} < 1/(3√K)) is sufficient for OMP to exactly recover any K-sparse signal. The analysis relies on simple and intuitive observations about OMP and matrices which satisfy the RIP. For restricted classes of K-sparse signals (those that are highly compressible), a relaxed bound on the isometry constant is also established. A deeper understanding of OMP may benefit the analysis of greedy algorithms in general. To demonstrate this, we also briefly revisit the analysis of the regularized OMP (ROMP) algorithm.
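For reference, the restricted isometry property of order K invoked above says that Φ acts almost like an isometry on all K-sparse vectors:

```latex
(1 - \delta_K)\,\|x\|_2^2 \ \le\ \|\Phi x\|_2^2 \ \le\ (1 + \delta_K)\,\|x\|_2^2
\qquad \text{for all } x \text{ with } \|x\|_0 \le K .
```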
In recent years there has been a growing interest in sparse approximations, due to their vast number of applications. The task of finding sparse approximations can be very difficult because there is no general method guaranteed to work in every situation; in fact, in certain cases there are no efficient methods for finding sparse approximations. This project considers the problem of finding sparse approximations and then examines the two most commonly used algorithms, the Lasso and orthogonal matching pursuit. It then goes on to discuss some practical applications of sparse approximations.
We explore a new approach to reconstruct sparse signals from a set of projected measurements. Unlike older methods that rely on the near-orthogonality property of the sampling matrix Φ for perfect reconstruction, our approach can be used to reconstruct signals where the columns of the sampling matrix need not be nearly orthogonal. We use the blockwise matrix inversion formula [13] to get a closed-form expression for the increase or decrease in the L2 norm of the residue obtained by eliminating from or adding to (respectively) the assumed support of the unknown signal x. We use this formula to design a computationally tractable algorithm to obtain the non-zero components of the unknown signal x. Compared to popular existing sparsity-seeking matching pursuit algorithms, each step of the proposed algorithm is locally optimal with respect to the actual objective function. Experiments show that our algorithm is significantly better than conventional techniques when the sparse coefficients are drawn from N(0, 1) or decay exponentially.
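A sketch of the kind of closed-form bookkeeping described above: when a column a is appended to the current support matrix A, the inverse Gram matrix and the drop in squared residual norm can be updated without re-solving the least-squares problem (standard block-inverse algebra; the exact update used in the paper may differ):

```python
import numpy as np

def extend_support(A, P, y, a):
    """Update inv(A.T @ A) and the residual gain after appending column `a` to A.

    A : (m, k) current support columns.
    P : (k, k) current inverse Gram matrix, P = inv(A.T @ A).
    y : (m,) signal being approximated.
    a : (m,) candidate atom.
    Returns the enlarged inverse Gram matrix and the drop in squared residual norm.
    """
    b = A.T @ a
    Pb = P @ b
    s = float(a @ a - b @ Pb)          # Schur complement (must be > 0)
    # Blockwise inversion formula for [[A^T A, b], [b^T, a^T a]]^{-1}.
    P_new = np.block([[P + np.outer(Pb, Pb) / s, -Pb[:, None] / s],
                      [-Pb[None, :] / s,          np.array([[1.0 / s]])]])
    # Decrease in ||residual||^2 from adding `a`, via the same algebra:
    r = y - A @ (P @ (A.T @ y))        # current least-squares residual
    gain = float(a @ r) ** 2 / s
    return P_new, gain
```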
arXiv (Cornell University), 2022
This paper is devoted to theoretical aspects of the optimality of sparse approximation. We undertake a quantitative study of new types of greedy-like bases that have recently arisen in the context of nonlinear m-term approximation in Banach spaces as a generalization of the properties that characterize almost greedy bases, i.e., quasi-greediness and democracy. As a means to compare the efficiency of these new bases with already existing ones as regards the implementation of the Thresholding Greedy Algorithm, we place emphasis on obtaining estimates for their sequence of unconditionality parameters. Using an enhanced version of the original Dilworth-Kalton-Kutzarova method from [17] for building almost greedy bases, we manage to construct bidemocratic bases whose unconditionality parameters satisfy significantly worse estimates than those of almost greedy bases, even in Hilbert spaces.
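The Thresholding Greedy Algorithm mentioned above is simple to state: expand f in the basis and keep the m coefficients of largest modulus. A sketch for a finite coefficient vector (an illustration of the definition, not the paper's constructions):

```python
import numpy as np

def tga(coeffs, m):
    """m-term thresholding approximation: keep the m largest coefficients in modulus."""
    keep = np.argsort(np.abs(coeffs))[-m:]   # indices of the m largest |c_i|
    approx = np.zeros_like(coeffs)
    approx[keep] = coeffs[keep]
    return approx
```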
2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 2009
This paper presents the orthogonal extension of the recently introduced complementary matching pursuit (CMP) algorithm for sparse approximation [1]. The CMP algorithm is analogous to the matching pursuit (MP) but operates in the row space of the dictionary matrix. It suffers from a similar sub-optimality as the MP. The orthogonal complementary matching pursuit (OCMP) algorithm presented here tries to remove this sub-optimality by updating the coefficients of all selected atoms at each iteration. Its development from the CMP follows the same procedure as that of the orthogonal matching pursuit (OMP). In contrast with the OMP, the residual errors resulting from the OCMP may not be orthogonal to all the atoms selected up to the respective iteration. Though the residual energy may exceed that of the OMP during the first iterations, it is shown that, compared with the OMP, the convergence speed is increased in the subsequent iterations and the sparsity of the solution vector is improved.
2004
This report studies the effect of introducing a priori knowledge to recover sparse representations when overcomplete dictionaries are used. We focus mainly on Greedy algorithms and Basis Pursuit as our algorithmic basis, while a priori information is incorporated by suitably weighting the elements of the dictionary. A unique sufficient condition is provided under which Orthogonal Matching Pursuit, Matching Pursuit, and Basis Pursuit are able to recover the optimally sparse representation of a signal when a priori information is available. Theoretical results show how the use of "reliable" a priori information can improve the performance of these algorithms. In particular, we prove that sufficient conditions to guarantee the retrieval of the sparsest solution can be established for dictionaries unable to satisfy the results of Gribonval and Vandergheynst [1] and Tropp [2]. As one might expect, our results reduce to the classical case of [1] and [2] when no a priori information is available. Some examples illustrate our theoretical findings.
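Since the abstract says the prior enters by weighting the elements of the dictionary, the mechanical effect on a pursuit algorithm is just a column rescaling before the greedy selection (a sketch of that preprocessing; the weights come from the prior and the direction of the bias shown here is an assumption):

```python
import numpy as np

def weight_dictionary(D, w):
    """Incorporate the prior by scaling each unit-norm atom d_i by its weight w_i.
    Running standard MP/OMP/BP on the result scores atoms by w_i * |<r, d_i>|,
    biasing selection toward atoms believed a priori to be active (larger w_i)."""
    return D * w[None, :]
```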