2008, International Journal of Computer Vision
Complex reflectance phenomena such as specular reflections confound many vision problems since they produce image 'features' that do not correspond directly to intrinsic surface properties such as shape and spectral reflectance. A common approach to mitigate these effects is to explore functions of an image that are invariant to these photometric events. In this paper we describe a class of such invariants that result from exploiting color information in images of dichromatic surfaces. These invariants are derived from illuminant-dependent 'subspaces' of RGB color space, and they enable the application of Lambertian-based vision techniques to a broad class of specular, non-Lambertian scenes. Using implementations of recent algorithms taken from the literature, we demonstrate the practical utility of these invariants for a wide variety of applications, including stereo, shape from shading, photometric stereo, material-based segmentation, and motion estimation.
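The core idea, an illuminant-dependent subspace of RGB color space in which the specular term vanishes, can be illustrated with a minimal Python sketch. Assuming the illuminant color is known and the dichromatic model holds, projecting each pixel onto the plane orthogonal to the source color suppresses the source-colored (specular) component. The function and parameter names below are illustrative, not the paper's implementation.

```python
import numpy as np

def specular_invariant(img, illuminant_rgb):
    """Project each RGB pixel onto the 2-D subspace orthogonal to the
    illuminant color. Under the dichromatic model the specular term is
    parallel to the illuminant, so this projection suppresses it.

    img            : HxWx3 float array (linear RGB)
    illuminant_rgb : length-3 array, color of the light source (assumed known)
    """
    s = np.asarray(illuminant_rgb, dtype=float)
    s = s / np.linalg.norm(s)                       # unit source-color direction
    # component of every pixel along the source color
    along = np.tensordot(img, s, axes=([2], [0]))[..., None] * s
    return img - along                              # specular-free subspace image

# usage (hypothetical illuminant): inv = specular_invariant(img, [1.0, 0.95, 0.9])
```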
2005
We present a photometric stereo method for non-diffuse materials that does not require an explicit reflectance model or reference object. By computing a data-dependent rotation of RGB color space, we show that the specular reflection effects can be separated from the much simpler, diffuse (approximately Lambertian) reflection effects for surfaces that can be modeled with dichromatic reflectance. Images in this transformed color space are used to obtain photometric reconstructions that are independent of the specular reflectance. In contrast to other methods for highlight removal based on dichromatic color separation (e.g., color histogram analysis and/or polarization), we do not explicitly recover the specular and diffuse components of an image. Instead, we simply find a transformation of color space that yields more direct access to shape information. The method is purely local and is able to handle surfaces with arbitrary texture.
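A hedged sketch of the kind of data-dependent rotation the abstract describes: rotate RGB space so that one axis aligns with the (known or separately estimated) source color; under the dichromatic model the remaining two channels then carry only diffuse shading and can feed a Lambertian photometric stereo solver. The construction below is a generic Gram-Schmidt completion, not necessarily the rotation used in the paper.

```python
import numpy as np

def suv_like_rotation(source_rgb):
    """Build a rotation of RGB space whose first axis is the source color.
    The remaining two axes span a subspace in which, under the dichromatic
    model, only diffuse shading remains. The construction is illustrative.
    """
    s = np.asarray(source_rgb, float)
    s = s / np.linalg.norm(s)
    # complete s to an orthonormal basis (Gram-Schmidt against a canonical axis)
    a = np.eye(3)[np.argmin(np.abs(s))]             # axis least aligned with s
    u = a - np.dot(a, s) * s
    u /= np.linalg.norm(u)
    v = np.cross(s, u)
    return np.stack([s, u, v])                      # rows: specular axis, then two diffuse axes

def diffuse_channels(img, source_rgb):
    """Return the two specular-free channels of an HxWx3 image for
    photometric reconstruction."""
    R = suv_like_rotation(source_rgb)
    rotated = img @ R.T                             # apply the rotation per pixel
    return rotated[..., 1:]                         # drop the specular channel
```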
2003
We derive a new class of photometric invariants that can be used for a variety of vision tasks, including lighting-invariant material segmentation, change detection and tracking, as well as material-invariant shape recognition. The key idea is the formulation of a scene radiance model for the class of "separable" BRDFs that can be decomposed into material-related terms and terms related to object shape and lighting. All the proposed invariants are simple rational functions of the appearance parameters (say, material, or shape and lighting). The invariants in this class differ from one another in the number and type of image measurements they require. Most of the invariants in this class need changes in illumination or object position between image acquisitions. The invariants can handle large changes in lighting, which pose problems for most existing vision algorithms. We demonstrate the power of these invariants using scenes with complex shapes, materials, textures, shadows and specularities.
Journal of the Optical Society of America A, 2000
Photometric stereo is a well-known technique for recovering the normals of a surface, but it requires three or more images taken under illumination from different directions. At best, one may dispense with the need for multiple images by using colored lights tuned to camera filters. A less restrictive paradigm is available using the Orientation-from-Color approach, wherein multiple broadband illuminants impinge on a surface simultaneously. In that method, colors for a Lambertian surface lie on an ellipsoid in color space. The method has mostly been applied to single-color objects, with the ellipsoid's quadratic-form parameters determined from a large number of pixels. Recently, however, Petrov et al. developed an entirely local approach, useful also for multicolored objects whose color is uniform within each patch. Here we investigate to what extent a method like Petrov's can be applied in the ostensibly simpler situation in which the complex lighting environment is known, i.e., a color photometric stereo situation with all lights at play at once and only a single image to analyze. We find that, assuming a simple model of color formation, we are able to recover the object colors along with surface normals using only a single image. Because we immerse the object in a known lighting environment, we show that only half of the equations utilized by Petrov are actually needed, making the method more stable. Nevertheless, solutions do not exist at every pixel; instead, we determine a best estimate of patch color using a robust estimator and then apply that estimate throughout the patch. Results are shown to be quite good compared to ground truth. The simple color model can often be made to hold more exactly by transforming the color space to one corresponding to spectrally sharpened sensors, which are a matrix transform away from the actual camera sensors. The reliability and accuracy of the normal-vector and surface-color recovery algorithm are improved by this straightforward transformation.
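As a rough illustration of single-image color photometric stereo under a simple color-formation model: with three known colored directional lights and a Lambertian patch of known albedo, the RGB measurement is linear in the surface normal, so the normal follows from one 3x3 solve per pixel. The matrices and values below are assumptions for illustration, not the paper's patch-based estimator.

```python
import numpy as np

def normal_from_single_rgb(c, light_dirs, light_channel_gains):
    """Single-image color photometric stereo under a simple Lambertian model.

    Per-pixel model (no shadowing, known uniform albedo folded into the gains):
        c = E @ D @ n
    where D's rows are the three light directions and E[j, k] is the response
    of camera channel j to light k.
    """
    D = np.asarray(light_dirs, float)            # 3x3, rows are unit light directions
    E = np.asarray(light_channel_gains, float)   # 3x3, channel-by-light responses
    n = np.linalg.solve(E @ D, np.asarray(c, float))
    return n / np.linalg.norm(n)

# usage (hypothetical lights): three lights from different directions,
# each seen mainly by one camera channel
# n = normal_from_single_rgb([0.4, 0.7, 0.2],
#                            light_dirs=[[0, 0, 1], [0.7, 0, 0.7], [0, 0.7, 0.7]],
#                            light_channel_gains=np.diag([1.0, 1.0, 1.0]))
```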
Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662), 2000
We consider the problem of determining functions of an image of an object that are insensitive to illumination changes. We first show that for an object with Lambertian reflectance there are no discriminative functions that are invariant to illumination. We do this by showing that given any two images, one can construct a single Lambertian object that can produce both images under two very simple lighting conditions. This result leads us to adopt a probabilistic approach in which we analytically determine a probability distribution for the image gradient as a function of the surface's geometry and reflectance. Our distribution reveals that the direction of the image gradient is insensitive to changes in illumination direction. We verify this empirically by constructing a distribution for the image gradient from more than 20 million samples of gradients in a database of 1,280 images of 20 inanimate objects taken under varying lighting conditions. Using this distribution, we develop an illumination insensitive measure of image comparison and test it on the problem of face recognition.
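The illumination-insensitive quantity identified above, the direction of the image gradient, is straightforward to compute. The finite-difference sketch below is an assumption about implementation details, not the paper's exact estimator.

```python
import numpy as np

def gradient_orientation(img, eps=1e-6):
    """Per-pixel image gradient direction for a 2-D grayscale array.

    Returns the orientation in (-pi, pi]; pixels with negligible gradient
    magnitude are set to NaN so flat regions can be ignored when comparing
    two images by orientation.
    """
    gy, gx = np.gradient(img.astype(float))      # finite-difference derivatives
    theta = np.arctan2(gy, gx)                   # gradient direction
    mag = np.hypot(gx, gy)
    theta[mag < eps] = np.nan                    # ignore flat regions
    return theta
```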
Lecture Notes in Computer Science, 2012
This paper proposes a method for illumination-invariant representation of natural color images. The invariant representation is derived not from spectral reflectance but from the RGB camera outputs alone. We suppose that the target objects are composed of dielectrics or metals, and that the surfaces include illumination effects such as highlight, gloss, or specularity. We present the procedure for realizing the invariant representation in three steps: (1) detection of specular highlights, (2) illumination color estimation, and (3) invariant representation of reflectance color. The performance of the proposed method is examined in detail in experiments using real-world objects, including metals and dielectrics. The limitations of the method are also discussed. Finally, the proposed representation is applied to the edge detection problem for natural color images.
2003
Illumination color determines the color appearance of an object. When the illumination color changes, the color appearance of the object changes accordingly, causing its appearance to be inconsistent. Many methods have been proposed to solve this problem. However, some researchers have found that, despite causing the inconsistency problem, the change of illumination color also provides a crucial constraint that can be used to solve the problem itself. Finlayson et al.
This paper presents a novel approach for recovering the shape of non-Lambertian, multicolored objects using two input images. We show that a ring light source with complementary colored lights can be effectively utilized for this purpose. Under this lighting, the brightness of an object surface varies with respect to the different reflections. Therefore, analyzing how brightness is modulated by illumination color gives us distinct cues to recover shape. Moreover, the use of complementary colored illumination enables color photometric stereo to be applied to multicolored surfaces. Here, we propose a color correction method based on the addition principle of complementary colors to remove the effect of illumination from the observed color. This allows the inclusion of surfaces with any number of chromaticities. Our method therefore offers significant advantages over previous methods, which often assume constant object albedo and Lambertian reflectance. To the best of our knowledge, this is the first attempt to employ complementary colors on a ring light source to compute shape while considering both non-Lambertian reflection and spatially varying albedo. To show the efficacy of our method, we present results on synthetic and real-world images and compare against photometric stereo methods from the literature.
2017
Illumination factors such as shading, shadow, and highlight observed on object surfaces affect the appearance and analysis of natural color images. Invariant representations with respect to these factors have been presented in several ways. Most of these methods used the standard dichromatic reflection model, which assumes inhomogeneous dielectric material; the standard model cannot describe metallic objects. This chapter introduces an illumination-invariant representation that is derived from the standard dichromatic reflection model for inhomogeneous dielectrics and the extended dichromatic reflection model for homogeneous metals. The illumination color is estimated from two inhomogeneous surfaces to recover the surface reflectance of an object without using a reference white standard. The overall performance of the invariant representation is examined in detail in experiments using real-world objects, including metals and dielectrics. The feasibility of the representation for effective edge detection is...
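One standard way to estimate the illumination color from two inhomogeneous dielectric surfaces, as mentioned above, is to fit a dichromatic plane (spanned by the body color and the illuminant color) to each surface's pixels and intersect the two planes. The sketch below follows that plane-intersection idea; it is not necessarily the exact procedure used in the chapter.

```python
import numpy as np

def dichromatic_plane_normal(pixels):
    """Fit the plane through the origin spanned by one surface's RGB pixels
    (dichromatic model: body color plus illuminant color) and return its normal."""
    P = np.asarray(pixels, float)                # Nx3 array of RGB samples
    # smallest right singular vector = normal of the best-fit plane through the origin
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    return vt[-1]

def estimate_illuminant(pixels_a, pixels_b):
    """Estimate the illuminant color direction as the intersection of the two
    dichromatic planes, one per dielectric surface."""
    n1 = dichromatic_plane_normal(pixels_a)
    n2 = dichromatic_plane_normal(pixels_b)
    e = np.cross(n1, n2)                         # direction shared by both planes
    e = np.abs(e) / np.linalg.norm(e)            # crude sign fix; color known up to scale
    return e
```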
2004
Luminance-based features are widely used as low-level input for computer vision applications, even when color data is available. Extending feature detection to the color domain prevents information loss due to isoluminance and allows the photometric information to be exploited. To fully exploit the extra information in the color data, the vector nature of color data has to be taken into account, and a sound framework is needed to combine feature and photometric invariance theory. In this paper we focus on the structure tensor, or color tensor, which adequately handles the vector nature of color images. Further, we combine the features based on the color tensor with photometric invariant derivatives to arrive at photometric invariant features. We circumvent the drawback of unstable photometric invariants by deriving an uncertainty measure to accompany the photometric invariant derivatives. The uncertainty is incorporated into the color tensor, thereby allowing the computation of robust photometric invariant features. The combination of photometric invariance theory and tensor-based features allows for the detection of a variety of features such as photometric invariant edges, corners, optical flow, and curvature. The proposed features are tested for noise characteristics and robustness to photometric changes. Experiments show that the proposed features are robust to scene-incidental events and that the proposed uncertainty measure improves the applicability of full invariants.
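A minimal sketch of the color structure tensor discussed above: sum the outer products of the per-channel gradients and smooth the tensor entries with a Gaussian window. Photometric-invariant variants substitute invariant derivatives for the plain RGB derivatives; this sketch uses plain RGB derivatives and illustrative parameter values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_structure_tensor(img, sigma=1.5):
    """Color (structure) tensor of an HxWx3 image.

    Accumulates the outer products of the per-channel gradients and smooths
    the three distinct entries (Jxx, Jxy, Jyy) with a Gaussian window.
    """
    img = img.astype(float)
    Jxx = np.zeros(img.shape[:2])
    Jxy = np.zeros_like(Jxx)
    Jyy = np.zeros_like(Jxx)
    for ch in range(img.shape[2]):
        gy, gx = np.gradient(img[..., ch])       # per-channel derivatives
        Jxx += gx * gx
        Jxy += gx * gy
        Jyy += gy * gy
    return (gaussian_filter(Jxx, sigma),
            gaussian_filter(Jxy, sigma),
            gaussian_filter(Jyy, sigma))
```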
1999
Although photometric data is a readily available, dense source of information in intensity images, it is not widely used in computer vision. A major drawback is its dependence on viewpoint and incident illumination. A novel methodology is presented which extracts reflectivity information of the various materials in the scene independent of incident light and scene geometry. A scene is captured under three different narrow-band color filters and the spectral derivatives of the scene are computed. The resulting spectral derivatives form a spectral gradient at each pixel. This spectral gradient is a surface reflectance descriptor which is invariant to scene geometry and incident illumination for smooth diffuse surfaces. The invariant properties of spectral gradients make them a particularly appealing tool in many diverse areas of computer vision such as color constancy, tracking, scene classification, material classification, stereo correspondence, and even re-illumination of a scene.
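A minimal sketch of the spectral-gradient idea: for a smooth diffuse surface the geometric shading factor is shared by all narrow bands, so it cancels in differences of log intensities across adjacent bands, leaving a descriptor of surface reflectance. The band layout and function name below are assumptions for illustration.

```python
import numpy as np

def spectral_gradient(bands, eps=1e-6):
    """Spectral gradient: differences of log intensities across adjacent
    narrow-band channels.

    bands : HxWxK stack of K narrow-band images of the same scene.
    Returns an HxWx(K-1) array of spectral derivatives; the common shading
    factor of a smooth diffuse surface cancels in each difference.
    """
    logb = np.log(np.maximum(bands.astype(float), eps))   # avoid log(0)
    return np.diff(logb, axis=2)
```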