2011
Figure 1. Overview of our method: from a 2D photo and its corresponding facial landmarks (a)-(b), the facial texture data is extracted by successive 2D triangular subdivisions (c)-(d), producing a new 3D face model (e)-(f).
The Visual Computer, 2009
In this paper, we present a 3D face photography system based on a facial expression training dataset, composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system allows one to obtain a 3D geometry representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, the facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photograph. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, an input is given by a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions belonging to ten subjects as the training set show rapidly reconstructed 3D faces which maintain spatial coherence similar to human perception, thus corroborating the efficiency and applicability of the proposed system.
Max-Planck-Institut für biologische Kybernetik, Technical Report, 1995
Human faces differ in shape and texture. This paper describes a representation of grey-level images of human faces based on an automated separation of two-dimensional shape and texture. The separations were done using the point correspondence between the different images, which was established through algorithms known from optical flow computation. A linear description of the separated texture and shape spaces allows a smooth modeling of human faces. Images of faces along the principal axes of a small data set of 50 faces are shown. We also reconstruct images of faces using the 49 remaining faces in our data set. These reconstructions are the projections of an image into the space spanned by the textures and shapes of the other faces.
2008 XXI Brazilian Symposium on Computer Graphics and Image Processing, 2008
This paper presents a 3D face photography system based on a small set of training facial range images. The training set is composed of 2D texture and 3D range images (i.e. geometry) of a single subject with different facial expressions. The basic idea behind the method is to create texture and geometry spaces based on the training set, along with transformations to go from one space to the other. The main goal of the proposed approach is to obtain a geometry representation of a given face provided as a texture image, which undergoes a series of transformations through the texture and geometry spaces. Facial feature points are obtained by an active shape model (ASM) extracted from the 2D gray-level images. PCA is then used to represent the face dataset, thus defining an orthonormal basis of texture and range data. An input face is given by a gray-level face image to which the ASM is matched. The extracted ASM is fed to the PCA basis representation and a 3D version of the 2D input image is built. The experimental results on static images and video sequences using seven samples as the training dataset show rapidly reconstructed 3D faces which maintain spatial coherence similar to human perception, thus corroborating the efficiency of our approach.
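The core texture-to-geometry pipeline this abstract describes (a PCA basis for each space plus a transformation between them) can be sketched with a toy example; the random training data, the dimensions, and the least-squares coefficient mapping below are hypothetical stand-ins, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical training set: 7 samples of flattened texture and range (geometry) data.
textures = rng.random((7, 256))
geometry = rng.random((7, 256))

# One orthonormal PCA basis per space (rows of Vt from the SVD of centered data).
t_mean, g_mean = textures.mean(axis=0), geometry.mean(axis=0)
_, _, Bt = np.linalg.svd(textures - t_mean, full_matrices=False)
_, _, Bg = np.linalg.svd(geometry - g_mean, full_matrices=False)

# Training coefficients of each sample in its own basis.
Ct = (textures - t_mean) @ Bt.T
Cg = (geometry - g_mean) @ Bg.T

# Assumed linear map from texture coefficients to geometry coefficients,
# fitted by least squares over the training samples.
M, *_ = np.linalg.lstsq(Ct, Cg, rcond=None)

def texture_to_geometry(tex):
    """Project a texture into the texture space, map the coefficients to the
    geometry space, and reconstruct the range data."""
    c = (tex - t_mean) @ Bt.T
    return g_mean + (c @ M) @ Bg
```

On training inputs this round-trips almost exactly; for unseen faces the quality depends on how well the training set spans the texture and geometry variation.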
Journal of Computers, 2007
One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality, facial expression simulation and computer-based plastic surgery. This paper addresses a method for the construction of 3D geometry of human faces based on the use of elliptic partial differential equations (PDEs). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained using a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we have shown the efficiency of the PDE-based method for 3D facial surface reconstruction from scan data. In addition, we show that our approach provides an efficient way of representing a face using a small set of parameters, which could be utilized for efficient facial data storage and verification purposes.
International Journal of Advance Research in Computer Science & Software Engineering, 2014
Eigenface or Principal Component Analysis (PCA) methods have demonstrated their success in face recognition, detection and tracking. In this paper we use this concept to reconstruct or represent a face as a linear combination of a set of basis images. The basis images are simply the eigenfaces. The idea is similar to representing a signal as a linear combination of complex sinusoids, as in the Fourier series. The main advantage is that the number of eigenfaces required is smaller than the number of face images in the database. Selecting the number of eigenfaces is important here, so we investigate the minimum number of eigenfaces required for faithful reproduction of a face image.
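A minimal sketch of the eigenface reconstruction described above, with random vectors standing in for a real face database; the image size, dataset size and component count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((60, 100 * 100))   # 60 flattened 100x100 faces (hypothetical data)

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# PCA via SVD; the rows of Vt are the eigenfaces (principal components).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 20                                # number of eigenfaces kept
eigenfaces = Vt[:k]

# Reconstruct one face as mean + linear combination of the k eigenfaces.
target = faces[0] - mean_face
coeffs = eigenfaces @ target          # projection coefficients
reconstruction = mean_face + coeffs @ eigenfaces

# Relative reconstruction error; it shrinks as k grows toward the dataset rank.
error = np.linalg.norm(reconstruction - faces[0]) / np.linalg.norm(faces[0])
```

Sweeping `k` and plotting `error` is one way to pick the minimum number of eigenfaces the abstract asks about.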
2000
A method of manifold representation for human faces with pose variations is proposed. Our model consists of mappings between 3D head angles and facial images separately represented in shape and texture, via sub-space models spanned by principal components (PCs). Explicit mappings to and from 3D head angles are used as processes of pose estimation and transformation, respectively. Generalization capability to unknown head poses enables our model to continuously cover the pose parameter space, providing high approximation accuracy. The feasibility of this model is evaluated in a number of experiments. We also propose a novel pose-invariant face recognition system using our model as the entry format for a gallery of known persons. Experimental results with 3D facial models recorded by a Cyberware scanner show that our model provides superior recognition performance against pose variations, and that the texture synthesis process is carried out correctly.
Procedings of the British Machine Vision Conference 2016, 2016
We propose a Simplified Generic Elastic Model (S-GEM) which constructs a 3D face from a given 2D face image by making use of a set of general human traits, viz. Gender, Ethnicity and Age (GEA). Different from the original GEM model, which employs and deforms the mean depth value of 3D sample faces according to a specific 2D input face image, we hypothesise that the variations inherent in the depth information for individuals are significantly mitigated by narrowing down the target information via a selection of specific GEA traits. This is achieved by representing the unknown 3D facial feature points of a 2D input as a Gaussian Mixture Model (GMM) over the samples of its own GEA type. This is then incorporated into a Bayesian framework whereby 3D face reconstruction is posed as estimating the PCA coefficients of a statistical 3D face model given the observation of 2D feature points, with their respective depths as hidden variables. By making the reasonable assumption that the support area of each GMM component is small enough, the proposed method reduces to choosing the depth values of the feature points of the sample face nearest to the 2D input face. Thus the 3D reconstruction is obtained with depth-augmented feature points rather than the 2D ones used in normal PCA statistical-model-based reconstruction. The proposed method has been tested on the USF 3D face database as well as the FRGC dataset. The experimental results show that the proposed S-GEM achieves improved reconstruction accuracy, consistency and robustness over the conventional PCA-based and GEM (mean-face feature points) reconstructions, and also yields visible improvements on certain facial features.
Forensic Science International, 2014
Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Accuracy of the authentication is reduced when the pose, illumination and expression of the training face images differ from those of the testing image. The methods in this paper are designed to improve the accuracy of a feature-based face recognition system when the pose of the input image differs from that of the training images. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only a single frontal face is required for face recognition, which avoids burdensome enrollment work; and 2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex pose, illumination and expression. The experimental results show that the proposed method improves the accuracy of face recognition with variant pose, illumination and expression.
In its quest for more reliability and higher recognition rates the face recognition community has been focusing more and more on 3D based recognition. Depth information adds another dimension to facial features and provides ways to minimize the effects of pose and illumination variations for achieving greater recognition accuracy. This chapter reviews, therefore, the major techniques for 3D face modeling, the first step in any 3D assisted face recognition system. The reviewed techniques are laser range scans, 3D from structured light projection, stereo vision, morphing, shape from motion, shape from space carving, and shape from shading. Concepts, accuracy, feasibility, and limitations of these techniques and their effectiveness for 3D face recognition are discussed.
Lecture Notes in Computer Science, 2006
In this paper we focus on the problem of developing a coupled statistical model that can be used to recover surface height from frontal photographs of faces. The idea is to couple intensity and height by jointly modeling their combined variations. We perform Principal Component Analysis (PCA) on the shape coefficients for both intensity and height training data in order to construct the coupled statistical model. Using the best-fit coefficients of an intensity image, height information can be implicitly recovered through the coupled statistical model. Experiments show that the method can generate good approximations of the facial surface shape from out-of-training photographs of faces.
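The coupling idea above, jointly modeling intensity and height so that height can be read off the best-fit coefficients of an intensity image, can be sketched with a joint PCA over concatenated vectors; the synthetic correlated data and the least-squares fitting step are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 40, 64
# Hypothetical per-face intensity and height coefficient vectors, driven by a
# shared latent factor so the coupling has something to recover.
latent = rng.random((n, 8))
intensity = latent @ rng.random((8, d))
height = latent @ rng.random((8, d))

# Couple the two modalities by joint PCA on the concatenated vectors.
joint = np.hstack([intensity, height])
mean_j = joint.mean(axis=0)
_, _, Vt = np.linalg.svd(joint - mean_j, full_matrices=False)
basis = Vt[:8]                      # coupled modes spanning intensity+height variation

def recover_height(intensity_vec):
    """Fit coupled coefficients using only the intensity half of each mode,
    then read the implied height off the height half."""
    bi = basis[:, :d]               # intensity part of the coupled modes
    bh = basis[:, d:]               # height part
    c, *_ = np.linalg.lstsq(bi.T, intensity_vec - mean_j[:d], rcond=None)
    return mean_j[d:] + c @ bh
```

Because every mode carries both an intensity and a height component, fitting the intensity half implicitly determines the height half, which is the essence of the coupled model.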
Journal of Global Research in Computer Sciences, 2011
To provide a comprehensive survey, we not only categorize existing modeling techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as biometric modalities, system evaluation, and issues of illumination and pose variation are covered. 3D models hold more information about the face, such as surface information, that can be used for face recognition or subject discrimination. This paper gives a survey of techniques and methods for 3D face modeling: first, model-based face reconstruction; second, methods for 3D face models, divided into holistic matching methods, feature-based (structural) matching methods, and hybrid methods; third, other methods, categorized into 2D-based, 3D-based and 2D+3D-based classes. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing l...
3D face reconstruction is considered a useful computer vision tool, though it is difficult to build. This paper proposes a 3D face reconstruction method which is easy to implement and computationally efficient. It takes a single 2D image as input and gives a 3D reconstructed image as output. Our method consists of three main steps: feature extraction, depth calculation, and creation of a 3D image from the processed image using a Basel face model (BFM). First, the features of a single 2D image are extracted using a two-step process. Before distinctive-feature extraction, a face must be detected to confirm whether one is present in the input image. For this purpose, facial features like eyes, nose and mouth are extracted. Then, distinctive features are mined using the scale-invariant feature transform (SIFT), to be used for 3D face reconstruction at a later stage. The second step comprises depth calculation, to assign the image a third dimension. A multivariate Gaussian distribution helps to find the third dimension, which is further tuned using shading cues obtained by the shape-from-shading (SFS) technique. Third, the data obtained from the above two steps is used to create a 3D image using the BFM. The proposed method does not rely on multiple images, lightening the computational burden. Experiments were carried out on different 2D images to validate the proposed method and compare its performance to those of the latest approaches. The experimental results demonstrate that the proposed method is time-efficient and robust in nature, and that it outperformed all of the tested methods in terms of detail recovery and accuracy. INDEX TERMS 3D face reconstruction, feature extraction, facial modeling, Gaussian distribution.
Engineering Applications of Artificial Intelligence, 2009
Applications related to game technology, law enforcement, security, medicine or biometrics are becoming increasingly important, which, combined with the proliferation of three-dimensional (3D) scanning hardware, has made 3D face recognition a promising and feasible alternative to 2D face methods. The main advantage of 3D data, when compared with traditional 2D approaches, is that it provides information that is invariant to rigid geometric transformations and to pose and illumination conditions. One key element for any 3D face recognition system is the modeling of the available scanned data. This paper presents new 3D models for facial surface representation and evaluates them using two matching approaches: one based on Support Vector Machines and another on Principal Component Analysis (with a Euclidean classifier). Also, two types of environments were tested in order to check the robustness of the proposed models: a controlled environment with respect to facial conditions (i.e. expressions, face rotations, etc.) and a non-controlled one (presenting face rotations and pronounced facial expressions). The recognition rates obtained using reduced spatial resolution representations (77.86% for non-controlled environments and 90.16% for controlled environments) show that the proposed models can be effectively used for practical face recognition applications.
This article presents the topic of three-dimensional facial reconstruction approaches and some of the methods used. In this paper, we implement three-dimensional facial reconstruction algorithms based on various face databases, using a single image as input, and analyze their performance on several aspects. Researchers have proposed many applications for this issue, but most have their drawbacks and limitations. Second, we discuss three-dimensional shapes and models based on facial techniques in detail. The article concludes with an analysis of several implementations and some technical discussion of 3D facial reconstruction.
2008 Digital Image Computing: Techniques and Applications, 2008
This paper surveys the topic of 3D face reconstruction using 2D images from a computer science perspective. Various approaches have been proposed as solutions for this problem, but most have their limitations and drawbacks. Shape from shading, shape from silhouettes, shape from motion and analysis-by-synthesis using morphable models are currently regarded as the main methods of attaining the facial information needed to reconstruct a 3D counterpart. Though this topic has gained much importance and popularity, a fully accurate facial reconstruction mechanism has not yet been identified due to the complexity and ambiguity involved. This paper discusses the general approaches to 3D face reconstruction and their drawbacks. It concludes with an analysis of several implementations and some speculation about the future of 3D face reconstruction.
Pattern Recognition, 2005
We present a fully automated algorithm for facial feature extraction and 3D face modeling from a pair of orthogonal frontal and profile view images of a person's face taken by calibrated cameras. The algorithm starts by automatically extracting corresponding 2D landmark facial features from both view images, then computes their 3D coordinates. Further, we estimate the coordinates of the features that are hidden in the profile view based on the visible features extracted in the two orthogonal face images. The 3D coordinates of the selected feature points obtained from the images are used first to align, and then to locally deform, the corresponding facial vertices of the generic 3D model. Preliminary experiments to assess the applicability of the resulting models for face recognition show encouraging results.
International Journal of Image and Graphics, 2009
The use of 3D data in face image processing applications has received considerable attention during the last few years. A major issue for the implementation of 3D face processing systems is the accurate and real time acquisition of 3D faces using low cost equipment. In this paper we provide a survey of 3D reconstruction methods used for generating the 3D appearance of a face using either a single or multiple 2D images captured with ordinary equipment such as digital cameras and camcorders. In this context we discuss various issues pertaining to the general problem of 3D face reconstruction such as the existence of suitable 3D face databases, correspondence of 3D faces, feature detection, deformable 3D models and typical assumptions used during the reconstruction process. Different approaches to the problem of 3D reconstruction are presented and for each category the most important advantages and disadvantages are outlined. In particular we describe example-based methods, stereo methods, video-based methods and silhouette-based methods. The issue of performance evaluation of 3D face reconstruction algorithms, the state of the art and future trends are also discussed.
14th International Conference on Image Analysis and Processing (ICIAP 2007), 2007
We propose a method for reconstructing 3D face shape from a camera that captures the face from various viewing angles. In this method, we do not directly reconstruct the shape, but estimate a small number of parameters which represent the face shape. The parameter space is constructed with principal component analysis (PCA) of a database of many face shapes collected from different people. Through PCA, the parameter space can represent the shape differences among the faces of various persons. From the input image sequence captured by the moving camera, the parameters of the face are estimated in an optimization framework. The experiments demonstrate that the proposed method can reconstruct the facial shape with an average error of 2.5 mm.
As 3D human face reconstruction has become very popular in recent times, it attracts many researchers. Construction of a 3D human face using only two orthogonal images and twelve landmark features is the main focus of the proposed approach. For 3D object modeling, the Open Graphics Library (OpenGL) is used as the platform through which modeling, modification and rendering are performed on the morphable model. The proposed approach proceeds through semi-automatic identification of the facial landmark features, calculation of the 3D coordinates of the human face, morphable model construction in OpenGL, reshaping of the morphable model, and rendering of the morphable model. The facial landmark identification is a semi-automatic method, as the module requires manual interaction for marking the facial landmarks on the image. Reshaping of the morphable model is required because the morphable model does not fit the actual face in most cases. The morphable model is reshaped by calculating the root mean square (RMS) error of the face coordinates. The rendering process does not require a wide-screen image, because the approach performs rendering using the input front-face image and side-face image as textures. Applications of this research help to overcome challenges in fields like crime detection, 3D game characterization and ornament exhibition, and in areas of medical technology like plastic surgery.
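Two of the steps above, computing a 3D landmark from two orthogonal views and the RMS error used to drive reshaping, can be sketched as follows; the landmark values are hypothetical, and the sketch assumes aligned, equal-scale frontal and profile images rather than the paper's exact calibration:

```python
import numpy as np

def combine_orthogonal(frontal_xy, profile_zy):
    """Merge a frontal-view (x, y) landmark and a profile-view (z, y) landmark
    into one 3D point, averaging the y coordinate the two views share."""
    x, y_f = frontal_xy
    z, y_p = profile_zy
    return np.array([x, (y_f + y_p) / 2.0, z])

# Hypothetical landmark seen in both views.
point = combine_orthogonal((120.0, 200.0), (80.0, 198.0))  # -> [120., 199., 80.]

def rms_error(model_pts, face_pts):
    """RMS distance between corresponding morphable-model and measured face
    points, i.e. the quantity used to judge how far the model must be reshaped."""
    d = np.asarray(model_pts) - np.asarray(face_pts)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```

In practice the two views rarely share an identical y coordinate for a landmark, which is why some reconciliation (here a simple average) is needed before the RMS comparison.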
2004
An analysis-by-synthesis framework for face recognition with variant pose, illumination and expression (PIE) is proposed in this paper. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only a single frontal face is required for face recognition, which avoids the burdensome enrollment work; 2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and 3) the proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with variant PIE.