2010, Constructive Approximation
A quantitative comparison of Pulse Code Modulation (PCM) and Sigma–Delta (Σ Δ) quantization methods is made in the setting of finite frames. Frames allow for redundant, overcomplete signal decompositions. PCM and Σ Δ are two industry-standard quantization methods, and the setting of finite frames is appropriate for a host of modern applications. Previous results for this comparison are known for upper error bounds, where Σ Δ performs better in the setting of frames, as opposed to orthonormal bases, where PCM is optimal. We answer the following question: For which signals x is the PCM error, that is, the norm of the difference between x and its PCM approximant, less than the Σ Δ error? We prove that, typically, in the setting of frames, Σ Δ outperforms PCM, but not always.
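To make the comparison concrete, here is a minimal Python sketch of the two quantizers applied to a finite frame expansion. The harmonic frame in R², the test signal, and the step size δ = 0.25 are illustrative choices, not taken from the paper; the coefficient-rounding PCM and the first-order Σ Δ recursion follow the standard definitions.

```python
import numpy as np

def harmonic_frame(N):
    # N equally spaced unit vectors on the circle: a unit-norm tight frame
    # for R^2 with frame bound N/2 (for N >= 3).
    angles = 2 * np.pi * np.arange(N) / N
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

def pcm(coeffs, delta):
    # PCM: round each frame coefficient independently to the nearest multiple of delta.
    return delta * np.round(coeffs / delta)

def sigma_delta(coeffs, delta):
    # First-order Sigma-Delta: quantize the coefficients sequentially, carrying
    # the running error u_n = u_{n-1} + c_n - q_n from one coefficient to the next.
    q = np.zeros_like(coeffs)
    u = 0.0
    for n, c in enumerate(coeffs):
        q[n] = delta * np.round((c + u) / delta)
        u = u + c - q[n]
    return q

N, delta = 64, 0.25
F = harmonic_frame(N)
x = np.array([0.3, -0.7])
coeffs = F @ x                                         # frame coefficients <x, e_n>
x_pcm = (2.0 / N) * F.T @ pcm(coeffs, delta)           # linear (dual-frame) reconstruction
x_sd = (2.0 / N) * F.T @ sigma_delta(coeffs, delta)
print(np.linalg.norm(x - x_pcm), np.linalg.norm(x - x_sd))
```

Typically the Σ Δ error printed here is noticeably smaller than the PCM error, consistent with the abstract's conclusion, though for particular signals the ordering can flip.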
IEEE Transactions on Information Theory, 2006
The K-level Sigma-Delta (ΣΔ) scheme with step size δ is introduced as a technique for quantizing finite frame expansions for R^d. Error estimates for various quantized frame expansions are derived, and, in particular, it is shown that ΣΔ quantization of a unit-norm finite frame expansion in R^d achieves approximation error ‖x − x̃‖ ≤ (δd/2N)(σ(F, p) + 1), where N is the frame size, and the frame variation σ(F, p) is a quantity which reflects the dependence of the ΣΔ scheme on the frame. Here ‖·‖ is the d-dimensional Euclidean 2-norm. Lower bounds and refined upper bounds are derived for certain specific cases. As a direct consequence of these error bounds one is able to bound the mean squared error (MSE) by an order of 1/N². When dealing with sufficiently redundant frame expansions, this represents a significant improvement over classical pulse-code modulation (PCM) quantization, which only has MSE of order 1/N under certain nonrigorous statistical assumptions. ΣΔ also achieves the optimal MSE order for PCM with consistent reconstruction.
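The error bound quoted above can be evaluated directly once the frame and an ordering are fixed. The sketch below computes the frame variation σ(F, p) for the natural circular ordering of an N-element harmonic frame in R² and the resulting bound (σ(F, p) + 1)·δd/(2N); the frame, the step size, and the ordering are assumptions made for illustration.

```python
import numpy as np

def frame_variation(frame):
    # sigma(F, p): sum of ||e_{p(n)} - e_{p(n+1)}|| over consecutive frame vectors
    # in the chosen ordering p (here, the natural circular order).
    return np.sum(np.linalg.norm(np.diff(frame, axis=0), axis=1))

N, d, delta = 64, 2, 0.25
angles = 2 * np.pi * np.arange(N) / N
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit-norm tight frame for R^2

sigma = frame_variation(F)                      # about 2*pi for this ordering
bound = (sigma + 1.0) * delta * d / (2.0 * N)   # (sigma(F,p) + 1) * delta * d / (2N)
print(sigma, bound)                             # the Sigma-Delta error is at most `bound`
```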
IEEE Transactions on Information Theory, 2010
In this paper, we extend the results that we derived in [1], [2] to the case of filter bank (FB) based transmission. We consider first- and second-order sigma-delta (SD) quantization in the context of oversampled digital Fourier transform (DFT) filter banks (DFT-FBs). In this context, we investigate the case of odd- and even-stacked DFT FBs. We establish the set of conditions that guarantee that the reconstruction mean squared error (MSE) behaves as 1/r, where r denotes the frame redundancy, and we derive closed-form expressions for the corresponding MSE upper bounds. The obtained results demonstrate that oversampled FBs that are subject to first- and second-order SD quantization can exhibit a reconstruction error behavior according to 1/r. Furthermore, the established results are shown to be true under the quantization model used in [3]-[6], as well as under the widely used additive white quantization noise assumption.
Contemporary Mathematics, 2008
We record a C-alphabet case for Σ∆ quantization of finite frames. The basic theory and error analysis are presented in the case of bounded frame variation for a given sequence {F_N} of frames for C^d. If bounded frame variation is not available for the given sequence, then there is still a satisfactory error analysis depending on the correct permutation of each F_N. An algorithm is designed to construct this permutation, and relevant simulations and examples are given.
preprint, April, 2005
The White Noise Hypothesis (WNH), introduced by Bennett half a century ago, assumes that in the pulse code modulation (PCM) quantization scheme the errors in individual channels behave like white noise, i.e., they are independent and identically distributed random variables. The WNH is key to estimating the mean square quantization error (MSE). But is the WNH valid? In this paper we take a close look at the WNH. We show that in a redundant system the errors from individual channels can never be independent. Thus, to an extent, the WNH is invalid. Our numerical experiments also indicate that with coarse quantization the WNH is far from being valid. However, as the main result of this paper we show that with fine quantization the WNH is essentially valid, in which case the errors from individual channels become asymptotically pairwise independent, each uniformly distributed in [−∆/2, ∆/2), where ∆ denotes the step size of the quantization.
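A quick Monte Carlo check of the behavior described above can be done with a small redundant frame: for coarse ∆ the per-channel PCM errors are visibly correlated, while for fine ∆ they approach the Uniform[−∆/2, ∆/2) model with small pairwise correlations. The frame (5 equally spaced unit vectors in R², chosen so no two channels are collinear) and the uniform source model are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 2                                        # 5 equally spaced unit vectors in R^2
angles = 2 * np.pi * np.arange(N) / N
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def channel_errors(delta, trials=20000):
    X = rng.uniform(-1.0, 1.0, size=(trials, d))   # random source vectors
    C = X @ F.T                                    # frame coefficients, one row per vector
    return C - delta * np.round(C / delta)         # per-channel PCM quantization errors

for delta in (0.5, 0.05):
    E = channel_errors(delta)
    corr = np.corrcoef(E, rowvar=False)
    print(delta,
          E.std() / delta,                         # approaches 1/sqrt(12) ~ 0.289 for fine delta
          np.abs(corr - np.eye(N)).max())          # cross-channel correlations shrink as delta -> 0
```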
IEEE Transactions on Communications, 1976
The technique of nearly instantaneous companding (NIC) that we describe processes n-bit μ-law or A-law encoded pulse-code modulation (PCM) to a reduced bit rate. A block of N samples (typically N ≈ 10) is searched for the sample having the largest magnitude, and each sample in the block is then reencoded to a nearly uniform quantization having (n − 2) bits and an overload point at the top of the chord of the maximum sample. Since an encoding of this chord must be sent to the receiver along with the uniform reencoding, the resulting bit rate is fs(n − 2 + 3/N) bits/s, where fs is the sampling rate. The algorithm can be viewed as an adaptive PCM algorithm that is compatible with the widely used μ-law and A-law companded PCM. Theoretical and empirical evidence is presented which indicates a performance slightly better than (n − 1) bit companded PCM (the bit rate is close to that of (n − 2) bit PCM). A feature which distinguishes NIC from most other bit-rate reduction techniques is a performance that is largely insensitive to the statistics of the input signal. In addition, we assume mid-tread bias and decision level assignment; extensions are straightforward.
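Here is a small worked instance of the bit-rate expression quoted above, fs(n − 2 + 3/N). The 8-bit log-PCM input, the 8 kHz sampling rate, and the block length N = 10 are assumed values for illustration; the 3 bits per block correspond to sending the chord of the block maximum.

```python
# Nearly instantaneous companding (NIC) bit-rate sketch.
# Assumed values: 8-bit log-PCM input at fs = 8000 Hz, blocks of N = 10 samples.
fs, n, N = 8000, 8, 10

pcm_rate = fs * n                      # standard companded PCM: fs * n bit/s
nic_rate = fs * (n - 2 + 3 / N)        # NIC: (n-2)-bit uniform reencoding + 3 chord bits per block

print(pcm_rate, nic_rate)              # 64000 vs 50400 bit/s
```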
Acoustics, Speech, and …, 2004
In signal processing, one of the primary goals is to obtain a digital representation of the signal of interest that is suitable for storage, transmission, and recovery. In general, the first step towards this objective is finding an atomic decomposition of the signal. More precisely, one ...
IEEE Transactions on Communications, 1982
the case for more conventional channels. This turns out to be only partly true with the gains more modest than anticipated. The Steiner systems, used here simply because of their regular intersection or distance properties, appear to offer savings over the orthogonal systems at the lower bandwidths. Even the simple DBL system performs slightly better than the orthogonal case for all bandwidths considered. At the higher bandwidths of 512 and 1024 (not reported on here), however, the improvement, while perhaps significant, may not justify the added complexity. Nonetheless, the tradeoffs among the various parameters make it an interesting study and, for a given bandwidth, nonorthogonal schemes of the type considered here should be of use.
Keywords: oversampled signals; deterministic quantization model in the frequency domain; semi-infinite programming; optimal design of periodic code; modified gradient descent method.
This paper proposes to reduce the quantization noise using a periodic code, derives a condition for achieving an improvement in the signal-to-noise ratio (SNR) performance, and proposes an optimal design for the periodic code. To reduce the quantization noise, oversampled input signals are first multiplied by the periodic code and then quantized via a quantizer. The signals are reconstructed by multiplying the quantized signals by the same periodic code and then passing them through an ideal lowpass filter. To derive the condition for achieving an improvement in the SNR performance, the quantization operator is first modeled by a deterministic polynomial function. The coefficients of the polynomial function are defined in such a way that the total energy difference between the quantization function and the polynomial function is minimized subject to a specification on the upper bound of the absolute difference. This is actually a semi-infinite programming problem, and our recently proposed dual parameterization method is employed for finding the globally optimal solution. Second, the condition for improving the SNR performance is derived via a frequency domain formulation. To optimally design the periodic code such that the SNR performance is maximized, a modified gradient descent method is proposed that prevents the obtained solution from being trapped in a locally optimal point and guarantees convergence. Computer numerical simulation results show that the proposed system can achieve a significant improvement compared to existing systems, such as the conventional system without multiplication by the periodic code, the system with additive dithering, and a first-order sigma-delta modulator.
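The processing chain described in the abstract (multiply by a periodic code, quantize, multiply by the same code, ideal lowpass filter) can be sketched in a few lines. The ±1 code, step size, oversampling ratio, and test signal below are placeholders, and the paper's optimal code design via semi-infinite programming and the modified gradient descent method is not reproduced here.

```python
import numpy as np

def reconstruct_with_periodic_code(x, code, delta, band):
    # Multiply by the periodic code, quantize, multiply by the same code again,
    # then apply an ideal lowpass filter in the frequency domain.
    c = np.resize(code, x.shape)            # periodic extension of the code
    q = delta * np.round((x * c) / delta)   # uniform quantizer
    y = q * c                               # de-spread with the same code
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(y))
    Y[freqs > band] = 0.0                   # ideal lowpass (keep only the signal band)
    return np.fft.irfft(Y, n=len(y))

n = 1024
t = np.arange(n)
x = 0.8 * np.sin(2 * np.pi * 0.01 * t)       # oversampled lowpass input
code = np.array([1.0, -1.0, 1.0, 1.0])       # placeholder +/-1 periodic code
x_hat = reconstruct_with_periodic_code(x, code, delta=0.1, band=0.02)
print(np.sqrt(np.mean((x - x_hat) ** 2)))    # reconstruction RMS error
```

Because the code takes values ±1, de-spreading returns the signal unchanged while the quantization noise is spread across frequency, so less of it survives the lowpass filter.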
2011 24th Canadian Conference on Electrical and Computer Engineering (CCECE), 2011
ITU-T G.711.1 is a multirate wideband extension of the well-known ITU-T G.711 pulse code modulation of voice frequencies. The extended system is fully interoperable with the legacy narrowband one. In the case where the legacy G.711 is used to code a speech signal and G.711.1 is used to decode it, quantization noise may be audible. For this situation, the standard proposes an optional postfilter. The application of postfiltering requires an estimation of the quantization noise. In this paper we review the process of estimating this coding noise and we propose a better noise estimator.
2010 44th Annual Conference on Information Sciences and Systems (CISS), 2010
Recent results make it clear that the compressed sensing paradigm can be used effectively for dimension reduction. On the other hand, the literature on quantization of compressed sensing measurements is relatively sparse, and mainly focuses on pulse-code-modulation (PCM) type schemes where each measurement is quantized independently using a uniform quantizer, say, of step size δ. The robust recovery result of Candès et al. and Donoho guarantees that in this case, under certain generic conditions on the measurement matrix such as the restricted isometry property, ℓ1 recovery yields an approximation of the original sparse signal with an accuracy of O(δ). In this paper, we propose sigma-delta quantization as a more effective alternative to PCM in the compressed sensing setting. We show that if we use an rth-order sigma-delta scheme to quantize m compressed sensing measurements of a k-sparse signal in R^N, the reconstruction accuracy can be improved by a factor of (m/k)^((r−1/2)α) for any 0 < α < 1 if m ≳_r k(log N)^(1/(1−α)) (with high probability on the measurement matrix). This is achieved by employing an alternative recovery method via rth-order Sobolev dual frames.
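To give a sense of scale for the improvement factor quoted above, the snippet below plugs illustrative values into (m/k)^((r−1/2)α); the specific k, m, N, r, and α are assumptions, not numbers from the paper.

```python
# Improvement factor of r-th order Sigma-Delta over PCM in compressed sensing,
# per the bound quoted above: a gain of (m/k)**((r - 1/2) * alpha) for 0 < alpha < 1,
# provided m is large enough relative to k * (log N)**(1 / (1 - alpha)).
# The numbers below (k, m, N, r, alpha) are illustrative only.
k, m, N = 10, 1000, 10**4
r, alpha = 2, 0.9

gain = (m / k) ** ((r - 0.5) * alpha)
print(gain)   # ~5.0e2, i.e. roughly two to three orders of magnitude
```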
2012
Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal, sometimes greatly so. This paper develops message-passing dequantization (MPDQ) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular.
1993
Linear predictive coding (LPC) parameters are widely used in various speech coding applications for representing the spectral envelope information of speech. In our earlier paper [1], we reported on a vector quantizer using line spectral frequencies and showed that it can quantize LPC parameters in 24 bits/frame with an average spectral distortion of 1 dB, less than 2% of frames having spectral distortion in the range 2-4 dB, and no frame having spectral distortion greater than 4 dB. In this paper, we study the performance of this quantizer in the presence of channel errors and compare it with that of a scalar quantizer. We also investigate the use of error correcting codes for improving the performance of the vector quantizer in the presence of channel errors.
IEEE Transactions on Speech and Audio Processing, 2004
The direct use of vector quantization (VQ) to encode LPC parameters in a communication system suffers from the following two limitations: 1) complexity of implementation for large vector dimensions and codebook sizes and 2) sensitivity to errors in the received indices due to noise in the communication channel. In the past, these issues have been simultaneously addressed by designing channel matched multistage vector quantizers (CM-MSVQ). A sub-optimal sequential design procedure has been used to train the codebooks of the CM-MSVQ. In this paper, a novel channel-optimized multistage vector quantization (CO-MSVQ) codec is presented, in which the stage codebooks are jointly designed. The proposed codec uses a source and channel-dependent distortion measure to encode line spectral frequencies derived from segments of a speech signal. Extensive simulation results are provided to demonstrate the consistent reduction in both the mean and the variance of the spectral distortion obtained using the proposed codec relative to the conventional sequentially designed CM-MSVQ. Furthermore, the perceptual quality of the reconstructed speech using the proposed codec was found to be better than that obtained using the sequentially designed CM-MSVQ.
2002
Progressive quantization is studied for transmission over noisy channels. For a finite set of decodable transmission rates, bounds on the minimum mean-squared distortion are derived. An asymptotically optimal schedule of channel code rates is derived as a function of the transmission rate.
IEEE Transactions on Speech and Audio Processing, 1993
Linear predictive coding (LPC) parameters are widely used in various speech processing applications for representing the spectral envelope information of speech. For low bit rate speech-coding applications, it is important to quantize these parameters accurately using as few bits as possible. Though vector quantizers are more efficient than scalar quantizers, their use for accurate quantization of LPC information (using 24-26 bits/frame) is impeded by their prohibitively high complexity. In this paper, a split vector quantization approach is used to overcome the complexity problem. Here, the LPC vector consisting of 10 line spectral frequencies (LSFs) is divided into two parts and each part is quantized separately using vector quantization. Using the localized spectral sensitivity property of the LSF parameters, a weighted LSF distance measure is proposed. Using this distance measure, it is shown that the split vector quantizer can quantize LPC information in 24 bits/frame with an average spectral distortion of 1 dB and less than 2% of frames having spectral distortion greater than 2 dB. The effect of channel errors on the performance of this quantizer is also investigated and results are reported.
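A minimal sketch of the split-VQ idea described above: the 10-dimensional LSF vector is split into two parts, and each part is quantized against its own codebook under a weighted squared distance. The 5+5 split, the random codebooks, and the uniform weights are placeholders; the paper's trained codebooks, its exact split, and its spectral-sensitivity weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_dist(x, codebook, w):
    # Weighted squared LSF distance to every codeword: sum_i w_i * (x_i - c_i)^2.
    return np.sum(w * (codebook - x) ** 2, axis=1)

def split_vq(lsf, codebooks, weights):
    # Quantize each sub-vector independently and concatenate the chosen codewords.
    parts, start = [], 0
    for cb, w in zip(codebooks, weights):
        dim = cb.shape[1]
        sub = lsf[start:start + dim]
        idx = np.argmin(weighted_dist(sub, cb, w))
        parts.append(cb[idx])
        start += dim
    return np.concatenate(parts)

lsf = np.sort(rng.uniform(0.0, np.pi, 10))            # a toy 10-dim LSF vector
codebooks = [rng.uniform(0.0, np.pi, (4096, 5)),      # 12 bits for LSFs 1-5
             rng.uniform(0.0, np.pi, (4096, 5))]      # 12 bits for LSFs 6-10 (24 bits/frame total)
weights = [np.ones(5), np.ones(5)]                    # placeholder sensitivity weights
print(split_vq(lsf, codebooks, weights))
```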
2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07, 2007
We consider sigma-delta quantization of Cyclic Geometrically Uniform (CGU) finite frames, a family of frames containing finite harmonic frames (both in C^M and R^M). For first- and second-order sigma-delta quantizers, we establish that the reconstruction mean squared error (MSE) behaves as 1/r², where r denotes the frame redundancy. This result is shown to be true both under the quantization model used in as well as under the widely used additive white quantization noise assumption. For the widely used L-th order noise shaping filter G(z) = (1 − z⁻¹)^L, we show that the MSE behaves as 1/r² irrespective of the filter order L. More importantly, we also prove that in the case of tight and normalized CGU frames, when the frame length is too large compared to the filter order, the reconstruction MSE can decay as fast as O(1/r^(2L+1)).
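Under the additive white quantization-noise assumption mentioned above, the effect of the noise-shaping filter G(z) = (1 − z⁻¹)^L on the in-band noise can be checked numerically: with an ideal lowpass reconstruction over a band of width π/r, the shaped noise power shrinks roughly as 1/r^(2L+1). This sketch only checks that scaling under those assumptions; it does not reproduce the paper's filter-bank MSE analysis or its conditions on CGU frames.

```python
import numpy as np

def inband_noise_power(L, r, n_freq=4096):
    # White quantization noise shaped by G(z) = (1 - z^-1)^L has power spectral
    # density proportional to |1 - e^{-jw}|^(2L) = (2 sin(w/2))^(2L).
    # Integrate it over the retained band [0, pi/r] of an ideal lowpass filter.
    w = np.linspace(0.0, np.pi / r, n_freq)
    psd = (2.0 * np.sin(w / 2.0)) ** (2 * L)
    return np.sum(psd) * (w[1] - w[0]) / np.pi

for L in (1, 2):
    ratio = inband_noise_power(L, 8) / inband_noise_power(L, 16)
    print(L, ratio)   # expect roughly 2**(2L+1): ~8 for L = 1, ~32 for L = 2
```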
IEEE Transactions on Information Theory, 2005
For the security of K, note that X is assumed to have min-entropy b even given A_1, B_1, ..., A_u, B_u, and we can assume that X, X_{i_1}, ..., X_{i_u} are mutually independent as they are generated in distant places. Thus, Theorem 2 implies ‖(X_{i_1}, ..., X_{i_u}, K) − (X_{i_1}, ..., X_{i_u}, U)‖ ≤ 2^(−(b+k+2−n−m)/2).
Suppose that the collection {e_i}_{i=1}^m forms a frame for R^k, where each entry of the vector e_i is a sub-Gaussian random variable. We consider expansions in such a frame, which are then quantized using a Sigma-Delta scheme. We show that an arbitrary signal in R^k can be recovered from its quantized frame coefficients up to an error which decays root-exponentially in the oversampling rate m/k. Here the quantization scheme is assumed to be chosen appropriately depending on the oversampling rate, and the quantization alphabet can be coarse. The result holds with high probability on the draw of the frame uniformly for all signals. The crux of the argument is a bound on the extreme singular values of the product of a deterministic matrix and a sub-Gaussian frame. For fine quantization alphabets, we leverage this bound to show polynomial error decay in the context of compressed sensing. Our results extend previous results for structured deterministic frame expansions and Gaussian compressed sensing measurements.
Keywords: compressed sensing, quantization, random frames, root-exponential accuracy, Sigma-Delta, sub-Gaussian matrices.
2010 Mathematics Subject Classification: 94A12, 94A20, 41A25, 15B52.