Papers by Karen Egiazarian
The procedure is based on blind source separation (BSS) and, in contrast to methods already available in the literature, it is completely automated and does not require peri-ocular EOG electrodes. The proposed approach removed most EOG artifacts in 6 long-term EEG recordings containing epileptic seizures without distorting the recorded ictal activity.
IEEE Signal Processing Letters, 2000
We propose a procedure for stack filter design that takes into consideration the filter's sample selection probabilities. A statistical optimization of stack filters can result in a class of stack filters, all of which are statistically equivalent. Such a situation arises in cases of nonsymmetric noise distributions or in the presence of constraints. Among the set of equivalent stack filters, our method constructs a statistically optimal stack filter whose sample selection probabilities are concentrated in the center of its window. This leads to improved detail preservation.
IEEE Signal Processing Letters, 2000
In this letter, we provide a method for deriving the output distribution function of any recursive stack filter. In particular, we give the output distribution of the recursive median filter. The method relies on finite automata and Markov chain theory. The distribution of any recursive stack filter is expressed as a multiplication of the vector of steady-state probabilities by the truth table vector of the Boolean function defining the filter.
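As a toy illustration of this machinery (a hand-worked special case, not the paper's general construction), the sketch below computes the output distribution of a window-3 recursive median filter on one binary threshold level of an i.i.d. input. The chain state is the previous output bit, and the steady-state vector is combined with the median's Boolean truth table; the closed form used for comparison is derived by hand for this case only.

```python
import numpy as np

# Toy Markov-chain derivation for a window-3 recursive median filter
# acting on one binary threshold level of an i.i.d. input.  The chain
# state is the previous output bit; q is P(input bit = 1).

def median3(s, a, b):
    return int(s + a + b >= 2)          # median of 3 bits = majority

def p_output_one(q):
    # Transition matrix over the previous-output state s in {0, 1}.
    P = np.zeros((2, 2))
    for s in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                w = (q if a else 1 - q) * (q if b else 1 - q)
                P[s, median3(s, a, b)] += w
    # Steady-state probabilities: solve pi P = pi with sum(pi) = 1.
    A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
    rhs = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return pi[1]                        # here the state equals the output

q = 0.3
# Closed form for this special case: q^2 / (q^2 + (1 - q)^2).
print(p_output_one(q), q**2 / (q**2 + (1 - q)**2))
```

The two printed values agree, confirming the steady-state computation against the hand-derived formula for this special case.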
IEEE Signal Processing Letters, 2004
Forcing functions represent an important class of Boolean functions that have been extensively studied in the analysis of the dynamics of random Boolean networks as models of genetic regulatory systems. Several other so-called Post classes of Boolean functions are closely related to forcing functions and have been used in learning theory as well as in control systems. We develop novel spectral algorithms to test membership of a Boolean function in these classes. These algorithms are highly efficient and are essential in learning problems, especially in the context of genetic regulatory networks, where the same learning procedures are applied repeatedly.
IEEE Sensors Journal, 2000
In this paper we present a new method for measuring the temporal noise in the raw data of digital imaging sensors (e.g., CMOS, CCD). The method is specially designed to estimate the variance function which describes the signal-dependent noise found in raw data. It gives the standard deviation of the noise as a function of the expectation of the pixel raw-data output value.
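A minimal sketch of the idea of a signal-dependent variance function: from repeated raw frames of a static scene, pair per-pixel sample means with per-pixel sample variances and fit a variance model. The linear Poissonian-Gaussian model var(y) = a·E[y] + b and the repeated-frame setup are illustrative assumptions here, not the paper's estimator.

```python
import numpy as np

# Assumed model (illustrative, not the paper's method):
#   var(y) = a * E[y] + b   (Poissonian-Gaussian raw-data noise)
rng = np.random.default_rng(0)
a_true, b_true = 0.01, 4.0
clean = np.tile(np.linspace(10, 200, 256), (256, 1))   # synthetic scene
stack = np.stack([clean + rng.normal(0, np.sqrt(a_true * clean + b_true))
                  for _ in range(50)])                 # 50 raw frames

mean_hat = stack.mean(axis=0).ravel()        # per-pixel expectation
var_hat = stack.var(axis=0, ddof=1).ravel()  # per-pixel noise variance

# Fit the linear variance function var = a * mean + b by least squares.
A = np.column_stack([mean_hat, np.ones_like(mean_hat)])
(a_est, b_est), *_ = np.linalg.lstsq(A, var_hat, rcond=None)
print(a_est, b_est)   # close to the true parameters
```

The fitted curve then gives the noise standard deviation as a function of the expected pixel value, sqrt(a·E[y] + b).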
IEEE Journal of Selected Topics in Signal Processing, 2000
In this paper we propose a novel efficient adaptive binary arithmetic coder which is multiplication-free and requires no look-up tables. To achieve this, we combine probability estimation based on a virtual sliding window with an approximation of multiplication and the use of simple operations to calculate the next approximation after the encoding of each binary symbol. We show that in comparison with the M-coder the proposed algorithm provides comparable computational complexity, a smaller memory footprint, and bitrate savings from 0.5 to 2.3% on average for the H.264/AVC standard and from 0.6 to 3.6% on average for the HEVC standard.
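A minimal multiplication-free probability estimator in the spirit of a virtual sliding window (the window length 2^W and the state scale are assumed parameters; the coder's exact update rule and interval arithmetic are not reproduced here):

```python
# Shift-only exponential estimator of P(bit = 1); a sketch of the
# virtual-sliding-window idea, not the coder's exact update.
W = 6                      # log2 of the virtual window length (assumed)
SCALE = 1 << 15            # probability of '1' is s / SCALE

def vsw_update(s, bit):
    if bit:
        return s + ((SCALE - s) >> W)   # move toward 1 by ~1/2^W of the gap
    return s - (s >> W)                 # move toward 0 by ~1/2^W of s

s = SCALE // 2             # start from p = 0.5
for bit in [1] * 200:      # a long run of ones drives the estimate up
    s = vsw_update(s, bit)
print(s / SCALE)
```

Because both branches use only a subtraction and a shift, the per-symbol update needs no multiplication and no look-up table, which is the property the coder exploits.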
IEEE Communications Letters, 2000
In recently proposed cooperative overtaking assistance systems, a video stream captured by a windshield-mounted camera in a vehicle is compressed and broadcast to the vehicle driving behind it, where it is displayed to the driver. It has been shown that this system can provide robust operation if video codec channel adaptation is undertaken by exploiting information from the cooperative awareness messages about any forthcoming increases in the multiple access channel load. In this letter we demonstrate the gains ...

EURASIP Journal on Embedded Systems, 2007
The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented in the form of a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by a 3D-DCT or a hybrid LOT+DCT 3D transform. The coding scheme is targeted at mobile devices: it has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.

EURASIP Journal on Advances in Signal Processing, 2012
View-plus-depth is a scene representation format where each pixel of a color image or video frame is augmented by per-pixel depth represented as a gray-scale image (map). In this representation, the quality of the depth map plays a crucial role, as it determines the quality of the rendered views. Among the artifacts in the received depth map, compression artifacts are usually the most pronounced and considered the most annoying. In this article, we study the problem of post-processing of depth maps degraded by improper estimation or by block-transform-based compression. A number of post-filtering methods are studied, modified and compared for their applicability to the task of depth map restoration and post-filtering. The methods range from simple Gaussian smoothing, to the in-loop deblocking filter standardized in the H.264 video coding standard, to more comprehensive methods which utilize structural and color information from the accompanying color image frame. The latter group contains our modification of the powerful local polynomial approximation, the popular bilateral filter, and an extension of it originally suggested for depth super-resolution. We further modify this latter approach by developing an efficient implementation of it. We present experimental results demonstrating high-quality filtered depth maps and offering practitioners options for highest quality or better efficiency.
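The guided-filtering idea above can be sketched as a joint (cross) bilateral filter, where the range weights for smoothing the depth map come from the accompanying color/intensity image rather than from the depth itself. Parameter values are illustrative, and this is a naive O(N·w²) implementation, not the paper's efficient one.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Smooth `depth` with spatial weights and range weights taken
    from `guide` (the color/intensity frame); a naive sketch."""
    h, w = depth.shape
    d = np.pad(depth, radius, mode="edge")
    g = np.pad(guide, radius, mode="edge")
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            dwin = d[i:i + 2*radius + 1, j:j + 2*radius + 1]
            gwin = g[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range weights from the guide preserve depth edges that
            # coincide with color edges
            rng_w = np.exp(-(gwin - g[i + radius, j + radius])**2
                           / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * dwin).sum() / wgt.sum()
    return out
```

On a noisy depth map whose discontinuities align with edges in the guide image, this suppresses noise within smooth regions while leaving the depth edges intact.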

EURASIP Journal on Advances in Signal Processing, 2007
This work addresses the problem of signal-dependent noise removal in images. An adaptive nonlinear filtering approach in the orthogonal transform domain is proposed and analyzed for several typical noise environments in the DCT domain. Applied locally, that is, within a window of small support, the DCT is expected to approximate the Karhunen-Loeve decorrelating transform, which enables effective suppression of noise components. Particular emphasis is placed on the detail preservation ability of the filter, that is, its ability not to destroy useful image content. A local adaptive DCT filtering is formulated for the two cases when signal-dependent noise can and cannot be mapped into additive uncorrelated noise with a homomorphic transform. Although the main focus is signal-dependent and pure multiplicative noise, the proposed filtering approach is also found to be competitive with state-of-the-art methods on images corrupted by pure additive noise.
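The core local DCT filtering step can be sketched as hard thresholding of sliding-window DCT coefficients with aggregation of the overlapping estimates. This sketch covers only the simple additive-noise case with an assumed fixed threshold; the paper's adaptive, signal-dependent processing is not reproduced.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix (C @ C.T = I).
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct_denoise(img, sigma, bs=8, thr=2.7):
    """Sliding-window DCT hard thresholding; thr*sigma is an assumed
    threshold rule for additive noise of standard deviation sigma."""
    C = dct_matrix(bs)
    h, w = img.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for i in range(h - bs + 1):
        for j in range(w - bs + 1):
            block = img[i:i+bs, j:j+bs]
            coef = C @ block @ C.T             # 2-D DCT of the block
            mask = np.abs(coef) > thr * sigma  # keep strong coefficients
            mask[0, 0] = True                  # always keep the DC term
            acc[i:i+bs, j:j+bs] += C.T @ (coef * mask) @ C
            cnt[i:i+bs, j:j+bs] += 1
    return acc / cnt                           # average overlapping estimates
```

Within each small window the DCT concentrates the signal in a few coefficients, so zeroing the weak coefficients removes mostly noise, and averaging the overlapping window estimates suppresses blocking.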

EURASIP Journal on Advances in Signal Processing, 2010
This paper concerns lossy compression of images corrupted by additive noise. The main contribution of the paper is that the analysis is carried out from the viewpoint of compressed image visual quality. Several coders for which the compression ratio is controlled in different manners are considered. Visual quality metrics that are the most adequate for the considered application (WSNR, MSSIM, PSNR-HVS-M, and PSNR-HVS) are used. It is demonstrated that under certain conditions the visual quality of compressed images can be slightly better than the quality of the original noisy images, due to the image filtering effect of lossy compression. The "optimal" parameters of coders for which this positive effect can be observed depend upon the standard deviation of the noise. This allows proposing an automatic procedure for compressing noisy images in the neighborhood of the optimal operation point, that is, where visual quality either improves or degrades only insignificantly. Comparison results for a set of grayscale test images and several noise variances are presented.

EURASIP Journal on Advances in Signal Processing, 2011
This article addresses the conditions under which filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising are studied, and the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in the original images, or the residual noise and distortions due to filtering in the output images, are practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.

EURASIP Journal on Advances in Signal Processing, 2012
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article we derive a framework for inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on the minimum description length principle and cross-validation are devised to select the polynomial orders, as required by the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework yield significant improvements over wavelet counterparts for this class of signals.
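The basic mechanism can be sketched for a single smooth (non-piecewise) signal: express the DCT images of a polynomial basis as a linear system and recover the polynomial coefficients from only the first few DCT coefficients by least squares. The fixed degree and signal below are illustrative assumptions; the paper's piecewise treatment and order selection are not reproduced.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix.
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

N, deg, M = 64, 3, 8                   # signal length, degree, kept coeffs
t = np.linspace(-1, 1, N)
signal = 0.5 - t + 2*t**2 + 0.3*t**3   # underlying cubic (assumed known degree)

C = dct_matrix(N)
y = (C @ signal)[:M]                   # only the first M DCT coefficients

V = np.vander(t, deg + 1, increasing=True)   # polynomial basis 1, t, t^2, t^3
A = (C @ V)[:M]                        # DCT images of the basis vectors
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)                          # ~ [0.5, -1.0, 2.0, 0.3]
```

Eight DCT coefficients suffice to recover the four polynomial coefficients essentially exactly, because the smooth basis vectors concentrate their DCT energy in the low frequencies.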

EURASIP Journal on Advances in Signal Processing, 2005
Textural features are one of the most important types of useful information contained in images. In practice, these features are commonly masked by noise. Relatively little attention has been paid to the texture preserving properties of noise attenuation methods. This motivates the following tasks: (1) to analyze the texture preservation properties of various filters; and (2) to design image processing methods capable of preserving texture features well while effectively reducing noise. This paper examines the texture feature preserving properties of different filters. The study is performed for a set of texture samples and different noise variances. Locally adaptive three-state schemes are proposed in which texture is considered as a particular class. For the detection of texture regions, several classifiers are proposed and analyzed. As shown, an appropriate trade-off of the designed filter properties is provided. This is demonstrated quantitatively for artificial test images and confirmed visually for real-life images.
Communications in Information and Systems, 2003
We address the problem of constructing a fast lossless code for the case when the source alphabet is large. The main idea of the new scheme may be described as follows: we group letters with small probabilities into subsets (acting as super-letters) and use time-consuming coding for these subsets only, whereas letters within a subset have the same code length and can therefore be coded fast. The described scheme can be applied to sources with known and unknown statistics.
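The grouping idea can be sketched by computing code lengths: rare letters are merged into one super-letter that is entropy-coded as a single symbol, and each rare letter then pays only a fast fixed-length index inside the group. The split threshold and the ideal-code-length assumption (length = -log2 p) are illustrative simplifications, not the paper's construction.

```python
import math

def grouped_code_lengths(probs, threshold=0.01):
    """Ideal code lengths under the grouping scheme sketch:
    frequent letters are coded individually; rare letters share the
    super-letter code plus a fixed-length within-group index."""
    frequent = {a: p for a, p in probs.items() if p >= threshold}
    rare = {a: p for a, p in probs.items() if p < threshold}
    p_group = sum(rare.values())
    idx_bits = math.ceil(math.log2(len(rare))) if len(rare) > 1 else 0
    lengths = {a: -math.log2(p) for a, p in frequent.items()}
    for a in rare:
        lengths[a] = -math.log2(p_group) + idx_bits
    return lengths

# 3 frequent letters plus a uniform tail of 16 rare letters.
probs = {"a": 0.4, "b": 0.3, "c": 0.2,
         **{f"x{i}": 0.1 / 16 for i in range(16)}}
L = grouped_code_lengths(probs)
avg = sum(probs[a] * L[a] for a in probs)
ent = -sum(p * math.log2(p) for p in probs.values())
print(avg, ent)   # average length stays close to the entropy
```

When the tail is near-uniform, as here, grouping costs almost nothing in average code length while the slow entropy-coding step handles one super-letter instead of sixteen individual letters.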
Circuits, Systems & Signal Processing, 2006
The problem of reconstructing a signal waveform when the observed realizations are corrupted by intensive noise and random shifts is considered in this paper. Several ways of performing bispectrum filtering are proposed and investigated. First, it is shown that the ...
Circuits, Systems & Signal Processing, 2005
Circuits, Systems, and Signal Processing, 1996
Systematic methods are developed for simplified implementations of cascaded stack and WOS filters. For recursive stack and WOS filters, corresponding nonrecursive implementations are given, with linear complexity with respect to the number of iterations. Dynamic domino logic is proposed for VLSI hardware implementation of positive Boolean functions, and a pipelined stack filter architecture is described.
users.soe.ucsc.edu
... Joint work with Kostadin Dabov, Aram Danielyan, Karen Egiazarian, Vladimir Katkovnik. ... E. Vansteenkiste, D. Van der Weken, W. Philips, and E. Kerre, "Perceived image quality measurement of state-of-the-art noise reduction schemes", LNCS 4179 - ACIVS 2006, pp. ...
Applied Optics, 2008
A discrete diffraction transform (DDT) is a novel discrete wave-field propagation model which is aliasing-free for a pixel-wise invariant object distribution. For this class of distributions the model is precise and has no typical discretization effects, because it corresponds to accurate calculation of the diffraction integral. A spatial light modulator (SLM) is a good example of a system where a pixel-wise invariant distribution appears. Frequency domain regularized inverse algorithms are developed for reconstruction of the object wavefield distribution from the distribution given in the sensor plane. The efficiency of the developed frequency domain algorithms is demonstrated by simulation.