IEEE Transactions on Information Theory
We study sparse approximation by greedy algorithms. Our contribution is twofold. First, we prove exact recovery with high probability of random K-sparse signals within ⌈K(1 + ε)⌉ iterations of the Orthogonal Matching Pursuit (OMP). This result shows that in a probabilistic sense the OMP is almost optimal for exact recovery. Second, we prove the Lebesgue-type inequalities for the Weak Chebyshev Greedy Algorithm, a generalization of the Weak Orthogonal Matching Pursuit to the case of a Banach space. The main novelty of these results is a Banach space setting instead of a Hilbert space setting. However, even in the case of a Hilbert space our results add some new elements to known results on the Lebesgue-type inequalities for the RIP dictionaries. Our technique is a development of the recent technique created by Zhang.
2014
We study sparse approximation by greedy algorithms. We prove the Lebesgue-type inequalities for the Weak Chebyshev Greedy Algorithm (WCGA), a generalization of the Weak Orthogonal Matching Pursuit to the case of a Banach space. The main novelty of these results is a Banach space setting instead of a Hilbert space setting. The results are proved for redundant dictionaries satisfying certain conditions. Then we apply these general results to the case of bases. In particular, we prove that the WCGA provides almost optimal sparse approximation for the trigonometric system in L_p, 2 ≤ p < ∞.
Forum of Mathematics, Sigma, 2014
We study sparse approximation by greedy algorithms. We prove the Lebesgue-type inequalities for the weak Chebyshev greedy algorithm (WCGA), a generalization of the weak orthogonal matching pursuit to the case of a Banach space. The main novelty of these results is a Banach space setting instead of a Hilbert space setting. The results are proved for redundant dictionaries satisfying certain conditions. Then we apply these general results to the case of bases. In particular, we prove that the WCGA provides almost optimal sparse approximation for the trigonometric system in $L_p$, $2\le p<\infty$.
Constructive Approximation, 2016
This paper is concerned with the performance of Orthogonal Matching Pursuit (OMP) algorithms applied to a dictionary D in a Hilbert space H. Given an element f ∈ H, OMP generates a sequence of approximations f_n, n = 1, 2, ..., each of which is a linear combination of n dictionary elements chosen by a greedy criterion. It is studied whether the approximations f_n are in some sense comparable to the best n-term approximation from the dictionary. One important result related to this question is a theorem of Zhang [8] in the context of sparse recovery of finite-dimensional signals. This theorem shows that OMP exactly recovers n-sparse signals whenever the dictionary D satisfies a Restricted Isometry Property (RIP) of order An for some constant A, and that the procedure is also stable in ℓ_2 under measurement noise. The main contribution of the present paper is to give a structurally simpler proof of Zhang's theorem, formulated in the general context of n-term approximation from a dictionary in arbitrary Hilbert spaces H. Namely, it is shown that OMP generates near-best n-term approximations under a similar RIP condition.
2011 National Conference on Communications (NCC), 2011
Compressed Sensing (CS) provides a set of mathematical results showing that sparse signals can be exactly reconstructed from a relatively small number of random linear measurements. A particularly appealing greedy approach to signal reconstruction from CS measurements is the so-called Orthogonal Matching Pursuit (OMP). We propose two modifications to the basic OMP algorithm, which can be handy in different situations.
2015
This is a survey of recent results in constructive sparse approximation. Three directions are discussed: (1) Lebesgue-type inequalities for greedy algorithms with respect to a special class of dictionaries, (2) constructive sparse approximation with respect to the trigonometric system, (3) sparse approximation with respect to dictionaries with tensor product structure. In all three cases constructive ways of sparse approximation are provided. The technique used is based on fundamental results from the theory of greedy approximation. In particular, results in direction (1) are based on deep methods developed recently in compressed sensing. We present some of these results with detailed proofs.
IEEE Transactions on Information Theory, 2010
Orthogonal matching pursuit (OMP) is the canonical greedy algorithm for sparse approximation. In this paper we demonstrate that the restricted isometry property (RIP) can be used for a very straightforward analysis of OMP. Our main conclusion is that the RIP of order K+1 (with isometry constant $\delta_{K+1} < 1/(3\sqrt{K})$) is sufficient for OMP to exactly recover any K-sparse signal. The analysis relies on simple and intuitive observations about OMP and matrices which satisfy the RIP. For restricted classes of K-sparse signals (those that are highly compressible), a relaxed bound on the isometry constant is also established. A deeper understanding of OMP may benefit the analysis of greedy algorithms in general. To demonstrate this, we also briefly revisit the analysis of the regularized OMP (ROMP) algorithm.
We consider the orthogonal matching pursuit (OMP) algorithm for the recovery of a high-dimensional sparse signal based on a small number of noisy linear measurements. OMP is an iterative greedy algorithm that selects at each step the column that is most correlated with the current residual. In this paper, we present a fully data-driven OMP algorithm with explicit stopping rules. It is shown that under conditions on the mutual incoherence and the minimum magnitude of the nonzero components of the signal, the support of the signal can be recovered exactly by the OMP algorithm with high probability. In addition, we also consider the problem of identifying significant components in the case where some of the nonzero components are possibly small. It is shown that in this case the OMP algorithm will still select all the significant components before possibly selecting incorrect ones. Moreover, with modified stopping rules, the OMP algorithm can ensure that no zero components are selected.
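For concreteness, here is a minimal NumPy sketch of the OMP iteration described above: at each step the column most correlated with the current residual is selected, and the coefficients on the selected support are re-fit by least squares. The stopping rule shown (a residual tolerance plus an iteration cap) is only an illustrative placeholder, not the data-driven stopping rules of the paper; the function name and parameters are likewise illustrative.

```python
import numpy as np

def omp(Phi, y, max_iter, tol=1e-6):
    """Minimal OMP sketch.

    Phi      : (N, d) matrix of measurement/dictionary columns (unit-norm assumed)
    y        : (N,) measurement vector
    max_iter : cap on the number of selected atoms
    tol      : stop once the residual norm falls below this value
    """
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(max_iter):
        # Pick the column most correlated with the current residual.
        correlations = np.abs(Phi.T @ residual)
        correlations[support] = 0.0            # never reselect an atom
        support.append(int(np.argmax(correlations)))
        # Re-fit all coefficients on the enlarged support by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) <= tol:    # illustrative stopping rule
            break
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat, support
```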
CoRR, 2011
Orthogonal Matching Pursuit (OMP) has long been considered a powerful heuristic for attacking compressive sensing problems; however, its theoretical development is, unfortunately, somewhat lacking. This paper presents an improved Restricted Isometry Property (RIP) based performance guarantee for T-sparse signal reconstruction that asymptotically approaches the conjectured lower bound given in Davenport et al. We also further extend the state of the art by deriving reconstruction error bounds for the case of general non-sparse signals subjected to measurement noise. We then generalize our results to the case of K-fold Orthogonal Matching Pursuit (KOMP). We finish by presenting an empirical analysis suggesting that OMP and KOMP outperform other compressive sensing algorithms in average-case scenarios. This turns out to be quite surprising since RIP analysis (i.e., worst-case scenario) suggests that these matching pursuits should perform roughly √T times worse than convex optimization, CoSaMP, and Iterative Thresholding.
2014 IEEE Workshop on Statistical Signal Processing (SSP), 2014
In this paper we define a new coherence index of a given dictionary, named the global 2-coherence, and study its relationship with the traditional mutual coherence and the restricted isometry constant. By exploring this relationship, we obtain more general results on sparse signal reconstruction using greedy algorithms in the compressive sensing (CS) framework. In particular, we obtain an improved bound over the best known results on the restricted isometry constant for successful recovery of sparse signals using orthogonal matching pursuit (OMP).
2009
Solving an under-determined system of equations for the sparsest solution has attracted considerable attention in recent years. Of the two well-known approaches, greedy algorithms such as matching pursuit (MP) are simpler to implement and can produce satisfactory results under certain conditions. In this paper, we compare several greedy algorithms in terms of the sparsity of the solution vector and the approximation accuracy. We present two new greedy algorithms based on the recently proposed complementary matching pursuit (CMP) and the sensing dictionary framework, and compare them with the classical MP, CMP, and the sensing dictionary approach. It is shown that in the noise-free case, the complementary matching pursuit algorithm performs the best among these algorithms.
IEEE Transactions on Signal Processing, 2013
Accepted in Signal Processing Letters on 21.8.2013
"Generalized Orthogonal Matching Pursuit (gOMP) is a natural extension of OMP algorithm where unlike OMP, it may select $N (\geq1)$ atoms in each iteration. In this paper, we demonstrate that gOMP can successfully reconstruct a $K$-sparse signal from a compressed measurement $ {\bf y}={\bf \Phi x}$ by $K^{th}$ iteration if the sensing matrix ${\bf \Phi}$ satisfies restricted isometry property (RIP) of order $NK$ where $\delta_{NK} < \frac {\sqrt{N}}{\sqrt{K}+2\sqrt{N}}$. Our bound offers an improvement over the very recent result shown in \cite{wang_2012b}. Moreover, we present another bound for gOMP of order $NK+1$ with $\delta_{NK+1} < \frac {\sqrt{N}}{\sqrt{K}+\sqrt{N}}$ which exactly relates to the near optimal bound of $\delta_{K+1} < \frac {1}{\sqrt{K}+1}$ for OMP (N=1) as shown in \cite{wang_2012a}."
Signal Processing, 2006
A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they are chosen from a large, linearly dependent collection. The first part of this paper proposes a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit (S-OMP), for simultaneous sparse approximation. Then it presents some numerical experiments that demonstrate how a sparse model for the input signals can be identified more reliably given several input signals. Afterward, the paper proves that the S-OMP algorithm can compute provably good solutions to several simultaneous sparse approximation problems. The second part of the paper develops another algorithmic approach called convex relaxation, and it provides theoretical results on the performance of convex relaxation for simultaneous sparse approximation.
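The S-OMP selection rule can be sketched as follows, under the assumption (made here for illustration) that each atom is scored by the sum of its absolute correlations with the residuals of all input signals; the paper's greedy criterion is of this flavor, but the function name and details below are illustrative.

```python
import numpy as np

def somp(Phi, Y, K):
    """S-OMP sketch for several signals that share a sparse support.

    Phi : (M, d) dictionary, Y : (M, L) matrix whose columns are the input signals,
    K   : number of atoms to select.
    """
    R = Y.copy()                                  # one residual column per signal
    support = []
    C = np.zeros((0, Y.shape[1]))
    for _ in range(K):
        # Score each atom by its total correlation with all current residuals.
        scores = np.sum(np.abs(Phi.T @ R), axis=1)
        scores[support] = 0.0
        support.append(int(np.argmax(scores)))
        # Joint least-squares fit of all signals over the shared support.
        C, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        R = Y - Phi[:, support] @ C
    return support, C
```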
ArXiv, 2019
Compressive Sensing (CS) is a new paradigm for the efficient acquisition of signals that have a sparse representation in a certain domain. Traditionally, CS has provided numerous methods for signal recovery over an orthonormal basis. However, modern applications have sparked the emergence of related methods for signals not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary, particularly due to their potential to generate different kinds of sparse representations of signals. To this end, we apply a signal space greedy method, which relies on the ability to optimally project a signal onto a small number of dictionary atoms, to address signal recovery in this setting. We describe a generalized variant of the iterative recovery algorithm called Signal Space Subspace Pursuit (SSSP) for this more challenging setting. Here, using the Dictionary-Restricted Isometry Property (D-RIP) rather than the classical RIP, we derive a lower bound on the number of measurements ...
2005
A simple sparse approximation problem requests an approximation of a given input signal as a linear combination of T elementary signals drawn from a large, linearly dependent collection. An important generalization is simultaneous sparse approximation. Now one must approximate several input signals at once using different linear combinations of the same T elementary signals. This formulation appears, for example, when analyzing multiple observations of a sparse signal that have been contaminated with noise.
Digital Signal Processing, 2012
Compressed sensing is a developing field aiming at the reconstruction of sparse signals acquired in reduced dimensions, which makes the recovery process underdetermined. The required solution is the one with minimum ℓ_0 norm due to sparsity; however, it is not practical to solve the ℓ_0 minimization problem. Commonly used techniques include ℓ_1 minimization, such as Basis Pursuit (BP), and greedy pursuit algorithms such as Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP). This manuscript proposes a novel semi-greedy recovery approach, namely A* Orthogonal Matching Pursuit (A*OMP). A*OMP performs an A* search to look for the sparsest solution on a tree whose paths grow similarly to the Orthogonal Matching Pursuit (OMP) algorithm. Paths on the tree are evaluated according to a cost function, which should compensate for different path lengths. For this purpose, three different auxiliary structures are defined, including novel dynamic ones. A*OMP also incorporates pruning techniques which enable practical application of the algorithm. Moreover, the adjustable search parameters provide means for a complexity-accuracy trade-off. We demonstrate the reconstruction ability of the proposed scheme on both synthetically generated data and images using Gaussian and Bernoulli observation matrices, where A*OMP yields less reconstruction error and higher exact recovery frequency than BP, OMP and SP. Results also indicate that the novel dynamic cost functions provide improved results as compared to a conventional choice.
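A much-simplified, hedged sketch of an A*OMP-style search: candidate supports are kept in a priority queue, expanded with the B most correlated atoms, and ranked by a residual norm scaled by a simple multiplicative length-compensation factor. The branching factor B, the expansion budget P, the factor alpha, and the single cost function here are illustrative stand-ins for the paper's auxiliary cost structures and pruning rules.

```python
import heapq
import numpy as np

def astar_omp(Phi, y, K, B=3, P=200, alpha=0.8):
    """Simplified A*OMP-style best-first search over candidate supports.

    Paths (partial supports) are expanded with the B atoms most correlated with
    the path's residual and ranked by a length-compensated cost
        cost(S) = ||y - Phi_S x_S|| * alpha ** (K - |S|),
    a simple multiplicative auxiliary function. B, P (expansion budget) and alpha
    are illustrative parameters, not the paper's exact choices.
    """
    def fit(support):
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        return y - Phi[:, support] @ coef, coef

    heap = [(np.linalg.norm(y) * alpha ** K, ())]     # (cost, support tuple)
    best = (np.inf, (), None)                         # (residual norm, support, coef)
    while heap and P > 0:
        P -= 1
        cost, support = heapq.heappop(heap)
        residual, coef = (y, None) if not support else fit(list(support))
        if len(support) == K:                         # complete path: keep the best one
            if np.linalg.norm(residual) < best[0]:
                best = (np.linalg.norm(residual), support, coef)
            continue
        corr = np.abs(Phi.T @ residual)
        corr[list(support)] = 0.0
        for atom in np.argpartition(corr, -B)[-B:]:   # expand with the B best atoms
            new_support = support + (int(atom),)
            new_residual, _ = fit(list(new_support))
            new_cost = np.linalg.norm(new_residual) * alpha ** (K - len(new_support))
            heapq.heappush(heap, (new_cost, new_support))
    x_hat = np.zeros(Phi.shape[1])
    if best[2] is not None:
        x_hat[list(best[1])] = best[2]
    return x_hat, list(best[1])
```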
ArXiv, 2021
We study here sparse recovery problems in the presence of additive noise. We analyze a thresholding version of the CoSaMP algorithm, named Thresholding Greedy Pursuit (TGP). We demonstrate that an appropriate choice of thresholding parameter, even without the knowledge of sparsity level of the signal and strength of the noise, can result in exact recovery with no false discoveries as the dimension of the data increases to infinity.
This paper studies a fundamental problem that arises in the sparse representation and compressed sensing community: can greedy algorithms give us a stable recovery from incomplete and contaminated observations? Using the Regularized Orthogonal Matching Pursuit (ROMP) algorithm, a modified version of Orthogonal Matching Pursuit (OMP) [1] recently introduced by D. Needell and R. Vershynin [2], we assert that ROMP is stable and guarantees approximate recovery of non-sparse signals, as good as that of the Basis Pursuit algorithm. We also set up criteria at which the algorithm halts, and upper bounds for the reconstruction error. It is proved in the paper that these upper bounds are proportional to the noise energy.
IEEE Transactions on Signal Processing, 2015
Greed is good. However, the tighter you squeeze, the less you have. In this paper, a less greedy algorithm for sparse signal reconstruction in compressive sensing, named orthogonal matching pursuit with thresholding, is studied. Using the global 2-coherence, which provides a "bridge" between the well-known mutual coherence and the restricted isometry constant, the performance of orthogonal matching pursuit with thresholding is analyzed and more general results for sparse signal reconstruction are obtained. It is also shown that, given the same assumption on the coherence index and the restricted isometry constant as required for orthogonal matching pursuit, the thresholding variation gives exactly the same reconstruction performance with significantly less complexity.
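A hedged sketch of the thresholding variation, assuming the natural reading that the exhaustive search for the maximally correlated atom is replaced by accepting the first atom whose correlation exceeds a threshold (falling back to the usual maximum if none does). The function name and the threshold parameter are illustrative, not taken from the paper.

```python
import numpy as np

def omp_with_thresholding(Phi, y, K, threshold):
    """Sketch of OMP with thresholding: accept the first atom whose correlation
    with the residual exceeds `threshold` instead of scanning all atoms for the
    maximum; fall back to the greedy maximum if no atom clears the threshold.
    `threshold` is an illustrative, user-chosen parameter."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        chosen = None
        for j in range(Phi.shape[1]):
            if j in support:
                continue
            if abs(Phi[:, j] @ residual) >= threshold:
                chosen = j                         # early exit saves inner products
                break
        if chosen is None:                         # fall back to plain OMP selection
            correlations = np.abs(Phi.T @ residual)
            correlations[support] = 0.0
            chosen = int(np.argmax(correlations))
        support.append(chosen)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat, support
```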
IEEE Transactions on Signal Processing, 2015
Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP) are two well-known recovery algorithms in compressed sensing. To recover a d-dimensional m-sparse signal with high probability, OMP needs $O(m \ln d)$ measurements, whereas BP needs only $O(m \ln \frac{d}{m})$ measurements. On the other hand, OMP is the practically more appealing algorithm due to its superior execution speed. In this work, we propose a scheme that brings the required number of measurements for OMP closer to that of BP. We term this scheme OMPα; it runs OMP for (m + αm) iterations instead of m iterations, for a chosen value of α ∈ [0, 1]. It is shown that OMPα guarantees high-probability signal recovery with $O(m \ln \frac{d}{\alpha m + 1})$ measurements. Another limitation of OMP, unlike BP, is that it requires knowledge of m. To overcome this limitation, we extend the idea of OMPα to another recovery scheme called OMP∞, which runs OMP until the signal residue vanishes. It is shown that OMP∞ can achieve recovery close to that of ℓ_0-norm minimization without any knowledge of m, like BP. Theorem 1 (Theorem 1 of [2]). Let $N \ge C_1 m \ln \frac{d}{m}$, and let Φ have N × d i.i.d. Gaussian entries. Then the following statement holds with probability exceeding $1 - e^{-c_1 N}$: it is possible ...
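A minimal sketch of the two schemes on top of a plain OMP loop: OMPα simply runs OMP for m + ⌈αm⌉ iterations, and OMP∞ keeps iterating until the residual (numerically) vanishes, so it needs no knowledge of m. The function names, the tolerance, and the cap of N iterations for OMP∞ are illustrative assumptions.

```python
import numpy as np

def omp_run(Phi, y, n_iter, tol=1e-10):
    """Plain OMP loop shared by both variants below."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(n_iter):
        if np.linalg.norm(residual) <= tol:       # residual has (numerically) vanished
            break
        correlations = np.abs(Phi.T @ residual)
        correlations[support] = 0.0
        support.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

def omp_alpha(Phi, y, m, alpha):
    """OMP_alpha: run OMP for m + ceil(alpha * m) iterations, with alpha in [0, 1]."""
    return omp_run(Phi, y, m + int(np.ceil(alpha * m)))

def omp_infinity(Phi, y, tol=1e-10):
    """OMP_infinity: iterate until the residual vanishes; needs no knowledge of m.
    With N measurements, at most N iterations can occur before the residual is zero."""
    return omp_run(Phi, y, Phi.shape[0], tol=tol)
```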