Papers by Patrick Le Callet
Human Vision and Electronic Imaging XIV, 2009
Motion blur is still an important issue on liquid crystal displays (LCD). In recent years, efforts have been made to characterize and measure this artifact. These methods make it possible to picture the blurred profile of a moving edge as a function of the scrolling speed and of the gray-to-gray transition considered. However, other aspects should be taken into account in order to understand how LCD motion blur is perceived.

Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly due to the fact that the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity between the left and right views on a flat screen instead of a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.
Existing video quality metrics usually do not take into consideration that spatial regions in video frames are of varying saliency and thus attract the viewer's attention differently. This paper proposes a model of saliency awareness to complement existing video quality metrics, with the aim of improving the agreement of objectively predicted quality with subjectively rated quality. For this purpose, we conducted a subjective experiment in which human observers rated the annoyance of videos with transmission distortions appearing either in a salient region or in a non-salient region. The mean opinion scores confirm that distortions in salient regions are perceived as much more annoying. It is shown that applying the saliency awareness model to two video quality metrics considerably improves their quality prediction performance.
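The core idea of saliency-aware quality pooling can be sketched as follows. This is a minimal illustration, not the paper's model: the weighting scheme, the exponent `alpha`, and the function name are assumptions made here for demonstration.

```python
import numpy as np

def saliency_weighted_score(local_quality, saliency, alpha=3.0):
    """Pool a per-pixel quality map into a single score, weighting
    each pixel by its normalized visual saliency.

    local_quality : 2-D array of per-pixel quality values
    saliency      : 2-D array of saliency values in [0, 1], not all zero
    alpha         : exponent controlling how strongly salient regions
                    dominate the pooled score (illustrative choice)
    """
    w = saliency ** alpha
    w = w / w.sum()                      # normalize weights to sum to 1
    return float((local_quality * w).sum())
```

Under such a scheme, the same local distortion pulls the pooled score down far more when it coincides with a salient region than when it falls in a non-salient one, mirroring the annoyance asymmetry reported in the experiment.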

The CIE 2006 model presents a convenient framework for calculating the cone fundamentals, and thus the color matching functions, for an average observer of any given age. The model incorporates three major physiological factors affecting observer variability, namely the optical densities of the ocular media absorption, of the macular pigment absorption, and of the visual pigments in the outer segments of the photoreceptors. However, it does not have a provision for a peak-wavelength shift in the photopigment absorption spectra, a significant but difficult-to-model contributor to observer variability. In the context of color reproduction with modern emissive displays based on different technologies, we performed a theoretical analysis of various aspects of the CIE 2006 model. Of these four factors (the three included in the model plus the peak-wavelength shift), the L-cone peak-wavelength shift was found to be the second most significant contributor to observer variability, after ocular media absorption. Excluding this factor from the CIE 2006 model, while understandable, can undermine the usefulness of age-dependent observer color matching functions.
One of the basic tenets of conventional applied colorimetry is that the whole population of color-normal observers can be represented by a single "standard" observer with reasonable accuracy. The 1964 CIE standard colorimetric observer has indeed served us well in industrial color imaging applications, until recently. With the proliferation of modern wide-gamut displays with narrow-band primaries, color scientists and engineers face a new challenge. Various recent studies, including those by the current authors, have shown that color perception on such displays varies significantly among color-normal observers. Conventional colorimetry has no means to predict this variation. In this paper, we explore this problem by summarizing the results of an ongoing study, and explain the practical significance of this issue in the context of display applications.

The variability among color-normal observers poses a challenge to modern display colorimetry because of the peaky primaries of such displays. But these devices also hold the key to a future solution to this issue. In this paper, we present a method for deriving seven distinct colorimetric observer categories, as well as a method for classifying individual observers as belonging to one of these seven categories. Five representative L, M and S cone fundamentals each (a total of 125 combinations) were derived through a cluster analysis on the combined set of the 47-observer data from the 1959 Stiles-Burch study and 61 color matching functions derived from the CIE 2006 model over the 20-80 age parameter range. From these, a reduced set of seven representative observers was derived through an iterative algorithm, using several predefined criteria on perceptual color differences (ΔE*00) with respect to the actual color matching functions of the 47 Stiles-Burch observers, computed for the 240 ColorChecker samples viewed under D65 illumination. Next, an observer classification method was implemented using two displays, one with broad-band primaries and the other with narrow-band primaries. In paired presentations on the two displays, eight color matches, corresponding to the CIE 10° standard observer and the seven observer categories, were shown in random sequences. Thirty observers evaluated all eight versions of fifteen test colors. For the majority of the observers, only one or two categories consistently produced acceptable or satisfactory matches for all colors. The CIE 10° standard observer was never selected as the most preferred category by any observer, and for six observers it was rejected as an unacceptable match for more than 50% of the test colors.
The results show that it is possible to effectively classify real, color-normal observers into a small number of categories which, in certain application contexts, can produce perceptibly better color matches for many observers than the matches predicted by the CIE 10° standard observer.
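The category-assignment step can be sketched as follows. This is a hedged illustration only: the paper compares matches using the CIEDE2000 (ΔE*00) formula, for which a plain Euclidean distance on tristimulus values stands in here, and all names and array shapes are assumptions.

```python
import numpy as np

def classify_observer(observer_cmf, category_cmfs, test_spectra):
    """Assign an observer to the colorimetric category whose color
    matching functions best reproduce the observer's own tristimulus
    responses over a set of test stimuli.

    observer_cmf  : (3, n_wavelengths) array, the observer's CMFs
    category_cmfs : list of (3, n_wavelengths) arrays, one per category
    test_spectra  : (n_samples, n_wavelengths) array of test stimuli

    NOTE: a faithful implementation would score candidates with the
    CIEDE2000 color-difference formula; Euclidean tristimulus distance
    is used here purely for illustration.
    """
    target = test_spectra @ observer_cmf.T          # (n_samples, 3)
    errors = []
    for cmf in category_cmfs:
        candidate = test_spectra @ cmf.T
        errors.append(np.linalg.norm(candidate - target, axis=1).mean())
    return int(np.argmin(errors))                   # index of best category
```

The paired-display experiment in the paper performs this selection perceptually rather than numerically, by letting observers judge the eight candidate matches directly.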

Various recent studies have shown that observer variability can be a significant issue in modern display colorimetry, since narrow-band primaries are often used to achieve wider color gamuts. As far as industrial applications are concerned, past work on various aspects of observer variability and metamerism has mostly focused on cross-media color matching, an application context that is different from color matching on two displays, both in terms of human visual performance and in terms of application requirements. In this paper, we report a set of three preliminary color matching experiments using a studio Cathode Ray Tube (CRT) display with broadband primaries and a modern wide-color-gamut Liquid Crystal Display (LCD) with narrow-band primaries, with and without surround. The two principal goals of these pilot tests are to validate the experimental protocol and to obtain a first set of metameric data of display color matches under different viewing conditions. Various experimental design considerations leading to the current test setup are discussed, and the results from the pilot tests are presented. We confirm the validity of our test setup, and show that the average color matches predicted by the CIE 1964 10° standard observer, although acceptable as average matches, can often be significantly and unacceptably different from individual observer color matches. The mean, maximum and 90th percentile values of the standard-observer-predicted color difference of individual observer color matches were 1.4, 3.3 and 2.6 ΔE*00, respectively.
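The summary statistics quoted above (mean, maximum, 90th percentile of the per-observer ΔE*00 errors) can be computed as sketched below; the function name and input layout are illustrative assumptions.

```python
import numpy as np

def match_error_stats(delta_e):
    """Summarize standard-observer prediction errors of individual
    color matches: mean, maximum and 90th percentile of a list of
    per-observer color differences (in dE*00 units)."""
    d = np.asarray(delta_e, dtype=float)
    return d.mean(), d.max(), np.percentile(d, 90)
```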

Lecture Notes in Computer Science, 2009
From moonlight to bright sunshine, real-world visual scenes contain a very wide range of luminance; they are said to be High Dynamic Range (HDR). Our visual system is well adapted to explore and analyze such variable visual content. It is now possible to acquire HDR content with digital cameras; however, it is not possible to render it faithfully on standard displays, which have only Low Dynamic Range (LDR) capabilities. This rendering usually produces bad exposure or loss of information. It is necessary to develop locally adaptive Tone Mapping Operators (TMO) to compress HDR content to LDR while keeping as much information as possible. The human retina is known to perform such a task to overcome the limited range of values that can be coded by neurons. The purpose of this paper is to present a TMO inspired by the properties of the retina. The presented biological model allows reliable dynamic range compression with natural color constancy properties. Moreover, its non-separable spatio-temporal filter enhances HDR video content processing with added temporal constancy.
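The local-adaptation principle behind such a retina-inspired TMO can be sketched as follows. This is a minimal Naka-Rushton-style compression with a box-filter adaptation level, a stand-in for the paper's full spatio-temporal retina model; the function name, filter, and parameters are assumptions.

```python
import numpy as np

def local_tone_map(luminance, radius=2, eps=1e-6):
    """Compress an HDR luminance map into [0, 1) with local,
    retina-like adaptation: each pixel is divided by (itself plus a
    local average), so bright neighborhoods are compressed more.

    luminance : 2-D array of scene luminance values (>= 0)
    radius    : half-width of the box window used as adaptation level
    eps       : small constant to avoid division by zero
    """
    L = np.asarray(luminance, dtype=float)
    # local adaptation level: simple box average around each pixel
    padded = np.pad(L, radius, mode="edge")
    k = 2 * radius + 1
    local = np.zeros_like(L)
    for dy in range(k):
        for dx in range(k):
            local += padded[dy:dy + L.shape[0], dx:dx + L.shape[1]]
    local /= k * k
    return L / (L + local + eps)   # Naka-Rushton style response
```

A uniform field maps to about 0.5 regardless of its absolute luminance, which is the essence of local dynamic range compression; the paper's model additionally filters in time for temporal constancy on video.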
We are currently conducting research on the perception of degradations in HDTV and their impact on quality. At present, no subjective quality assessment protocol specific to HDTV has been standardized. The definition of the conditions required for such tests has not yet been finalized by the VQEG research group, in which we participate. We have …
Visual Communications and Image Processing 2010, 2010
In this paper, distortions caused by packet loss during video transmission are evaluated with respect to their perceived annoyance. In this respect, the impact of visual saliency on the level of annoyance is of particular interest, as regions and objects in a video frame are typically not of equal importance to the viewer. For this purpose, gaze patterns from a …

Human Vision and Electronic Imaging XIV, 2009
Present quality assessment (QA) algorithms aim to generate scores for natural images consistent with subjective scores for the quality assessment task, in which human observers evaluate a natural image based on its perceptual resemblance to a reference. Natural images communicate useful information to humans, and this paper investigates the utility assessment task, where human observers evaluate the usefulness of a natural image as a surrogate for a reference. Current QA algorithms implicitly assess utility insofar as an image that exhibits strong perceptual resemblance to a reference is also of high utility. However, a perceived quality score is not a proxy for a perceived utility score: a decrease in perceived quality may not affect the perceived utility. Two experiments are conducted to investigate the relationship between the quality assessment and utility assessment tasks. The results from these experiments provide evidence that an algorithm optimized to predict perceived quality scores cannot immediately predict perceived utility scores. Several QA algorithms are evaluated in terms of their ability to predict subjective scores for the quality and utility assessment tasks. Among the QA algorithms evaluated, the visual information fidelity (VIF) criterion, which is frequently reported to provide the highest correlation with perceived quality, predicted both perceived quality and utility scores reasonably well. The consistent performance of VIF on both tasks raised suspicions in light of the evidence from the psychophysical experiments. A thorough analysis of VIF revealed that it artificially emphasizes evaluations at finer image scales (i.e., higher spatial frequencies) over those at coarser image scales (i.e., lower spatial frequencies).
A modified implementation of VIF, denoted VIF*, is presented that provides a statistically significant improvement over VIF for the quality assessment task and statistically worse performance for the utility assessment task. A novel utility assessment algorithm, referred to as the natural image contour evaluation (NICE), is introduced that compares the contours of a test image to those of a reference image across multiple image scales to score the test image. NICE represents a viable departure from traditional QA algorithms built on energy-based approaches and is capable of predicting perceived utility scores.
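A contour-based, multi-scale comparison in the spirit of NICE can be sketched as follows. This is not NICE itself: NICE uses proper edge detection and a more refined comparison, whereas here a gradient-magnitude threshold and a per-pixel disagreement rate stand in, and all names and parameters are assumptions.

```python
import numpy as np

def contour_disagreement(reference, test, n_scales=3, thresh=0.1):
    """Compare binary edge maps of a test image against a reference
    across several dyadic image scales. Returns 0.0 for identical
    images; larger values mean fewer shared contours (lower utility).
    """
    def edges(img):
        # crude contour map: threshold on gradient magnitude
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy) > thresh

    score = 0.0
    ref, tst = reference.astype(float), test.astype(float)
    for _ in range(n_scales):
        score += np.mean(edges(ref) != edges(tst))
        # move to the next coarser scale by decimating both images
        ref = ref[::2, ::2]
        tst = tst[::2, ::2]
    return score / n_scales
```

The multi-scale averaging is the point of interest here: it prevents any single scale (in particular the finest one, which the VIF analysis above shows can dominate) from driving the score alone.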
Although several metrics have been proposed in the literature to assess the perceptual quality of two-dimensional images, no similar effort has been devoted to the quality assessment of stereoscopic images. Therefore, in this paper, we propose a methodology for the subjective assessment of stereo images. Moreover, in the process of defining an objective metric specifically designed for stereoscopic images, we evaluate whether 2-D objective image quality metrics are also suited for quality assessment of stereo images. Specifically, distortions deriving from both coding and blurring are taken into account, and the quality degradation of the stereo pair is estimated.
2009 16th IEEE International Conference on Image Processing (ICIP), 2009
Packets in a video bitstream contain data with different levels of importance that yield unequal amounts of quality distortion when lost. In order to avoid sharp quality degradation due to packet loss, we propose in this paper an error resilience method that is applied to the Region of Interest (RoI) of the picture. This method protects the RoI without incurring significant overhead. We perform an eye tracking test to determine the RoIs of a video sequence, and we assess the performance of the proposed model in error-prone environments by means of a subjective quality test. Loss simulation results show that stopping temporal error propagation in the RoIs of the pictures helps preserve an acceptable visual quality in the presence of packet loss.

Human Vision and Electronic Imaging XV, 2010
The merit of an objective quality estimator for either still images or video is gauged by its ability to accurately estimate the perceived quality scores of a collection of stimuli. Encounters with radically different distortion types that arise in novel media representations require that researchers collect perceived quality scores representative of these new distortions in order to confidently evaluate a candidate objective quality estimator. Two common methods used to collect perceived quality scores are absolute categorical rating (ACR) [1] and the subjective assessment methodology for video quality (SAMVIQ) [2, 3]. The choice of a particular test method affects the accuracy and reliability of the data collected. An awareness of the potential benefits and costs attributed to the ACR and SAMVIQ test methods can guide researchers to choose the more suitable method for a particular application. This paper investigates the tradeoffs of these two subjective testing methods using three subjective databases that have scores corresponding to each method. The databases contain either still images or video sequences. This paper is organized as follows: Section 2 summarizes the two test methods compared in this paper, ACR and SAMVIQ. Section 3 summarizes the content of the three subjective databases used to evaluate the two test methods. An analysis of the ACR and SAMVIQ test methods is presented in Section 4. Section 5 concludes the paper.
2009 17th International Packet Video Workshop, 2009
Video transmission over the Internet can be subject to packet loss, which reduces the end-user's Quality of Experience (QoE). Solutions aiming at improving the robustness of a video bitstream can be used to mitigate this problem. In this paper, we propose a new Region of Interest-based error resilience model to protect the most important part of the picture from distortions. We conduct eye tracking tests in order to collect the Region of Interest (RoI) data. Then, in the encoder, we apply an intra-prediction restriction algorithm to the macroblocks belonging to the RoI. Results show that, while no significant overhead is introduced, the perceived quality of the video's RoI, measured by means of a perceptual video quality metric, increases in the presence of packet loss compared to the traditional encoding approach.
The primary goal of this study is to find a measurement method for motion blur that is easy to carry out and gives results that can be reproduced from one lab to another. The method should also be able to take into account techniques for the reduction of motion blur, such as backlight flashing. Two methods have been compared. The first uses a high-speed camera that directly pictures the blurred-edge profile. The second exploits a mathematical analysis of motion-blur formation to construct the blurred-edge profile from the temporal step response. Measurement results and method proposals are given and discussed.
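The second method rests on a standard piece of analysis: an eye tracking a moving edge on a hold-type display integrates the emitted light over one frame period, so the spatial blur profile is a moving average of the temporal step response with a one-frame window. A minimal sketch of that construction (function name and sampling conventions are assumptions):

```python
import numpy as np

def blurred_edge_profile(step_response, frame_samples):
    """Construct the motion-blurred edge profile from a display's
    temporal step response by averaging it over a sliding window of
    one frame period.

    step_response : 1-D array, luminance vs. time after a
                    gray-to-gray transition (uniformly sampled)
    frame_samples : number of samples per frame period
    """
    r = np.asarray(step_response, dtype=float)
    kernel = np.ones(frame_samples) / frame_samples   # one-frame box window
    return np.convolve(r, kernel, mode="valid")
```

For an idealized instantaneous transition this yields the classic hold-type result: a linear ramp exactly one frame period wide, from which a normalized blurred edge width can then be read off.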
SID Symposium Digest of Technical Papers, 2009
It has been recognized for some time now that LCDs introduce blur when showing moving objects or moving images. Common motion-blur measurement methods make it possible to picture the blurred profile of an edge moving at constant velocity. A normalized blurred edge width is then measured for several gray-to-gray transitions to give a motion-blur score for the display under test. However, these objective measurements are only partly based on the behavior of the human visual system, and it is an open question how well they correlate with the subjective experience of observers. In this study, we develop a subjective experiment to assess the annoyance and acceptability of motion blur. Results are given and compared with measurement data.