2004
An analysis-by-synthesis framework for face recognition with variant pose, illumination and expression (PIE) is proposed in this paper. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized from the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only a single frontal face is required for face recognition, which avoids burdensome enrollment work; 2) the synthesized face samples make it possible to conduct recognition under difficult conditions such as complex PIE; and 3) the proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. Extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with variant PIE.
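A minimal sketch of this analysis-by-synthesis pipeline, assuming hypothetical helpers reconstruct_3d_face(), render_virtual_face() and extract_features() stand in for the paper's reconstruction, rendering and feature steps; the matching shown here is a plain nearest-neighbor comparison, not necessarily the paper's classifier.

```python
# Sketch of the enrollment/recognition flow described in the abstract.
# reconstruct_3d_face(), render_virtual_face() and extract_features() are
# placeholders for the paper's own components (assumptions, not its API).
import numpy as np

def enroll(frontal_image, reconstruct_3d_face, render_virtual_face,
           poses, lights, expressions, extract_features):
    """Build a gallery of feature vectors from synthesized virtual faces."""
    model = reconstruct_3d_face(frontal_image)          # personalized 3D face
    gallery = []
    for pose in poses:
        for light in lights:
            for expr in expressions:
                virtual = render_virtual_face(model, pose, light, expr)
                gallery.append(extract_features(virtual))
    return np.stack(gallery)

def recognize(probe_image, galleries, extract_features):
    """Return the identity whose virtual-face gallery lies closest to the probe."""
    probe = extract_features(probe_image)
    best_id, best_dist = None, np.inf
    for identity, gallery in galleries.items():
        dist = np.min(np.linalg.norm(gallery - probe, axis=1))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id
```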
International Journal of Digital Content Technology and its Applications, 2009
This paper proposes an analysis-by-synthesis framework for face recognition with variant pose, illumination and expression. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different pose, illumination and expression are synthesized from the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only a single frontal face is required for face recognition, which avoids burdensome enrollment work; 2) the synthesized face samples make it possible to conduct recognition under difficult conditions such as complex pose, illumination and expression; and 3) the proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. Experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with variant pose, illumination and expression.
Lecture Notes in Computer Science, 2004
Current appearance-based face recognition systems have difficulty recognizing faces under appearance variations when only a small number of training images is available. We present a scheme based on the analysis-by-synthesis framework. A 3D generic face model is aligned onto a given frontal face image. A number of synthetic face images with appearance variations are generated from the aligned 3D face model. These synthesized images are used to construct an affine subspace for each subject. Training and test images for each subject are represented in the same way in such a subspace. Face recognition is achieved by minimizing the distance between the subspace of a test subject and that of each subject in the database. Only a single face image of each subject is available for training in our experiments. Preliminary experimental results are promising.
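A hedged sketch of the subspace-matching idea: build an affine subspace per subject from its synthesized images and compare subspaces through the principal angles between their bases. The basis dimension and the exact distance definition below are illustrative choices, not the paper's.

```python
# Affine subspace per subject + principal-angle distance between subspaces
# (illustrative formulation; the paper's precise metric is not reproduced here).
import numpy as np

def affine_subspace(images, dim=5):
    """images: (n_samples, n_pixels). Returns (mean, orthonormal basis)."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    # Principal directions of the centered samples span the affine subspace.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:dim].T                      # basis: (n_pixels, dim)

def subspace_distance(basis_a, basis_b):
    """Distance based on principal angles between two linear subspaces."""
    sigma = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    angles = np.arccos(np.clip(sigma, -1.0, 1.0))
    return np.linalg.norm(angles)

def identify(test_basis, gallery_subspaces):
    """gallery_subspaces: dict identity -> (mean, basis). Pick the closest one."""
    return min(gallery_subspaces,
               key=lambda k: subspace_distance(test_basis, gallery_subspaces[k][1]))
```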
2012
Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Authentication accuracy is reduced when the pose, illumination and expression of the training face images differ from those of the testing image. The methods in this paper are designed to improve the accuracy of a feature-based face recognition system when the pose of the input image differs from that of the training images. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different poses are synthesized from the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only a single frontal face is required for face recognition, which avoids burdensome enrollment work; and 2) the synthesized face samples make it possible to conduct recognition under difficult conditions such as complex pose, illumination and expression. Experimental results show that the proposed method improves the accuracy of face recognition with variant pose, illumination and expression.
IEEE Transactions on Information Forensics and Security, 2014
One of the most critical sources of variation in face recognition is facial expressions, especially in the frequent case where only a single sample per person is available for enrollment. Methods that improve the accuracy in the presence of such variations are still required for a reliable authentication system. In this paper, we address this problem with an analysis-by-synthesis based scheme, in which a number of synthetic face images with different expressions are produced. For this purpose, an animatable 3D model is generated for each user based on 17 automatically located landmark points. The contribution of these additional images in terms of recognition performance is evaluated with three different techniques (PCA, LDA and LBP) on the FRGC and BOSPHORUS 3D face databases. Significant improvements in face recognition accuracy are achieved for each database and algorithm.
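As one example of the baselines named above, here is a minimal sketch of LBP-based matching on an expression-augmented gallery: uniform-LBP histograms compared with a chi-square distance. The neighborhood parameters and the absence of block-wise histograms are simplifying assumptions, not the paper's configuration.

```python
# Uniform-LBP descriptor + chi-square nearest-neighbor matching (illustrative).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, points=8, radius=1):
    codes = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2                     # number of uniform pattern labels
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match(probe_face, gallery):
    """gallery: dict identity -> list of real and synthetic face images."""
    probe_hist = lbp_histogram(probe_face)
    scores = {pid: min(chi_square(probe_hist, lbp_histogram(g)) for g in imgs)
              for pid, imgs in gallery.items()}
    return min(scores, key=scores.get)
```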
In its quest for more reliability and higher recognition rates, the face recognition community has been focusing more and more on 3D-based recognition. Depth information adds another dimension to facial features and provides ways to minimize the effects of pose and illumination variations to achieve greater recognition accuracy. This chapter therefore reviews the major techniques for 3D face modeling, the first step in any 3D-assisted face recognition system. The reviewed techniques are laser range scans, 3D from structured light projection, stereo vision, morphing, shape from motion, shape from space carving, and shape from shading. The concepts, accuracy, feasibility, and limitations of these techniques and their effectiveness for 3D face recognition are discussed.
2015
Face recognition is one of the most popular biometric authentication methods, but recognizing faces under different illumination, pose and expression variations is a challenging problem. To improve accuracy under such variations, an analysis-by-synthesis based scheme is adopted in which expression simulations are performed for 40 subjects (400 images). The Fast Bounding Box algorithm is used for face recognition; its use improves accuracy and yields better robustness of face recognition.
Biometrics is the area of bioengineering that pursues the characterization of individuals in a population (e.g., a particular person) by means of something that the individual is or produces. Among the different modalities in biometrics, face recognition has been a focus in research for the last couple of decades because of its wide potential applications and its importance to meet the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent to 2D face recognition, i.e. sensitivity to illumination conditions and positions of a subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: expression recognition system, expressional face recognition system and neutral face recognition system. A system for the recognition of faces with one type of expression (smile) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.
2011
This paper deals with one-sample face recognition, a new and challenging problem in pattern recognition. In the proposed method, the frontal 2D face image of each person is divided into sub-regions. After computing the 3D shape of each sub-region, a fusion scheme is applied to them to create the 3D shape of the whole face. Then, the 2D face image is draped over the corresponding 3D shape to construct a 3D face image. Finally, by rotating the 3D face image, virtual samples with different views are generated. Experimental results on the ORL dataset using a nearest-neighbor classifier reveal an improvement of about 5% in recognition rate for one sample per person when the training set is enlarged with the generated virtual samples. Compared with other related works, the proposed method has the following advantages: 1) only a single frontal face is required for face recognition, and the outputs are virtual images with variant views for each individual; 2) it requires only 3 key points of the face (eyes and nose); 3) 3D shape estimation for generating virtual samples is fully automatic and faster than other 3D reconstruction approaches; 4) it is fully mathematical with no training phase, and the estimated 3D model is unique for each individual.
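A minimal sketch of the virtual-view generation step, assuming the reconstructed face is available as a textured 3D point set: rotate about the vertical axis and project orthographically. The crude nearest-pixel rasterization and the chosen angles are assumptions, not the paper's rendering procedure.

```python
# Rotate a textured 3D point cloud and splat it onto the image plane.
import numpy as np

def rotation_y(angle_deg):
    a = np.deg2rad(angle_deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def render_virtual_view(points, gray_values, angle_deg, size=(112, 92)):
    """points: (n, 3) face vertices; gray_values: (n,) texture samples."""
    rotated = points @ rotation_y(angle_deg).T
    h, w = size
    # Orthographic projection: drop depth, map x/y into pixel coordinates.
    xy = rotated[:, :2]
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-9)
    cols = (xy[:, 0] * (w - 1)).astype(int)
    rows = ((1 - xy[:, 1]) * (h - 1)).astype(int)
    image = np.zeros((h, w))
    image[rows, cols] = gray_values                 # nearest-pixel splat
    return image

# Usage idea: render views at, e.g., -30, -15, 0, 15, 30 degrees and add them
# to the training set of a nearest-neighbor classifier.
```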
Proceedings of the IEEE, 2006
Unconstrained illumination and pose variation lead to significant variation in the photographs of faces and constitute a major hurdle preventing the widespread use of face recognition systems. The challenge is to generalize from a limited number of images of an individual to a broad range of conditions. Recently, advances in modeling the effects of illumination and pose have been accomplished using three-dimensional (3-D) shape information coupled with reflectance models. Notable developments in understanding the effects of illumination include the nonexistence of illumination invariants, a characterization of the set of images of objects in fixed pose under variable illumination (the illumination cone), and the introduction of spherical harmonics and low-dimensional linear subspaces for modeling illumination. To generalize to novel conditions, either multiple images must be available to reconstruct 3-D shape or, if only a single image is accessible, prior information about the 3-D shape and appearance of faces in general must be used. The 3-D Morphable Model was introduced as a generative model to predict the appearance of an individual while using a statistical prior on shape and texture, allowing its parameters to be estimated from a single image. Based on these new understandings, face recognition algorithms have been developed to address the joint challenges of pose and lighting. In this paper, we review these developments and provide a brief survey of the resulting face recognition algorithms and their performance.
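A compact LaTeX sketch of the standard Lambertian model and the low-dimensional harmonic subspace the abstract refers to; the notation here is a generic textbook formulation, not lifted from the surveyed papers.

```latex
% Lambertian image formation for a distant point source:
\[
  I(p) \;=\; \rho(p)\,\max\!\bigl(\mathbf{n}(p)\cdot \mathbf{s},\, 0\bigr),
\]
% where $\rho(p)$ is the albedo, $\mathbf{n}(p)$ the surface normal at pixel $p$,
% and $\mathbf{s}$ the light direction scaled by intensity. Expanding the
% clamped-cosine kernel in spherical harmonics $Y_{lm}$ shows that images under
% arbitrary distant illumination are well approximated by a 9-dimensional
% linear subspace:
\[
  I(p) \;\approx\; \sum_{l=0}^{2}\sum_{m=-l}^{l} a_{lm}\,\rho(p)\,
  Y_{lm}\bigl(\mathbf{n}(p)\bigr),
\]
% the "nine harmonic images" approximation underlying the illumination
% subspace methods surveyed in the paper.
```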
2010
In face authentication and other face biometric methods, an image of a person can be misclassified if the pose of their face differs from that of the training data, unless steps are taken to eliminate these inaccuracies. The methods in this paper are designed to improve the accuracy of a face authentication system when the pose of the input image differs from that of the training images. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different poses are synthesized from the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only a single frontal face is required for face recognition, which avoids the burdensome enrollment work...
Face recognition, based on biometrics, is one of the most intensively studied and challenging technologies, and also one of the most promising. As the most natural and friendly identification method, automatic face recognition has become an important part of next-generation computing technology. 3D face recognition methods are able to overcome the problems resulting from illumination, expression or pose variations in 2D face recognition. Facial features are mainly concentrated in the eyes, nose and mouth; therefore, this paper mainly detects the characteristics of these three regions of the human face and then calculates the geometric characteristics of the face based on these characteristic points, including straight-line Euclidean distance, curvature distance, area, angle and volume. The main contribution of the work is that the curve distance between two key feature points is added to the feature vector, which consists of Euclidean distance, curve distance, angle and volume. Experimental results show that the algorithm can recognize faces effectively.
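A hedged sketch of the kind of geometric features described above, computed from a few 3D landmarks (eye corners, nose tip, and a sampled nose-bridge curve). The specific landmark set and the final feature selection are illustrative assumptions, not the paper's exact definitions.

```python
# Geometric landmark features: Euclidean distance, curve (arc) length, angle,
# area and volume between 3D landmark points (illustrative selection).
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def curve_distance(curve_points):
    """Arc length of a surface curve sampled as an ordered list of 3D points."""
    pts = np.asarray(curve_points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def angle_at(vertex, p, q):
    """Angle (radians) at `vertex` formed by segments vertex-p and vertex-q."""
    v0 = np.asarray(vertex, float)
    u, v = np.asarray(p, float) - v0, np.asarray(q, float) - v0
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def triangle_area(a, b, c):
    a, b, c = (np.asarray(x, float) for x in (a, b, c))
    return 0.5 * float(np.linalg.norm(np.cross(b - a, c - a)))

def tetra_volume(a, b, c, d):
    a, b, c, d = (np.asarray(x, float) for x in (a, b, c, d))
    return abs(float(np.dot(b - a, np.cross(c - a, d - a)))) / 6.0

def feature_vector(left_eye, right_eye, nose_tip, nose_bridge_curve):
    """Stack a few distances/angles into one descriptor."""
    return np.array([
        euclidean(left_eye, right_eye),
        euclidean(left_eye, nose_tip),
        euclidean(right_eye, nose_tip),
        curve_distance(nose_bridge_curve),
        angle_at(nose_tip, left_eye, right_eye),
        triangle_area(left_eye, right_eye, nose_tip),
    ])
```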
Interest in face recognition technology has recently increased in academia and industry because of its wide potential application and its importance to meet the security needs of today's world. This paper proposes a method to tackle an important problem in 3D face recognition: the deformation of facial geometry that results from the expression changes of a subject. A framework composed of three subsystems: expression recognition system, expressional face recognition system and neutral face recognition system is proposed and implemented. The recognition of faces that were neutral or exhibited one expression, i.e. smiling, was tested on a database of 30 subjects. The results proved the feasibility of this framework.
Biometrics is an emerging area of bioengineering that pursues the characterization of a person by means of something that the person is or produces. Face recognition is a particularly attractive biometric challenge. Most of the face recognition research performed in the past used 2D intensity images. However, algorithms based on 2D images are not robust to changes of illumination in the environment or orientation of the subject. The ability to acquire 3D scans of human faces removes those ambiguities, since they capture the exact geometry of the subject, invariant to illumination and orientation changes. Unencumbered by those limitations, research in 3D face recognition is now beginning to address a different source of error in biometric recognition: facial geometry deformation caused by facial expressions, which can cause 3D algorithms that treat faces as rigid surfaces to fail. In this paper, a 3D face recognition framework is proposed to tackle this problem. The framework is composed of three subsystems: an expression recognition system, an expressional face recognition system and a neutral face recognition system. In particular, a system for the recognition of faces with one type of expression (smile) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.
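A minimal sketch of the three-subsystem architecture described above: an expression classifier routes each probe scan either to a neutral-face recognizer or to an expression-specific (e.g., smiling) recognizer. The classifier and recognizer objects are placeholders, not the paper's components.

```python
# Routing pipeline: expression recognition -> expressional or neutral matcher.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ExpressionAwarePipeline:
    classify_expression: Callable[[Any], str]       # e.g. returns "neutral" or "smile"
    neutral_recognizer: Callable[[Any], str]        # identity from a neutral scan
    expression_recognizers: Dict[str, Callable[[Any], str]]

    def identify(self, face_scan) -> str:
        label = self.classify_expression(face_scan)
        if label == "neutral":
            return self.neutral_recognizer(face_scan)
        recognizer = self.expression_recognizers.get(label, self.neutral_recognizer)
        return recognizer(face_scan)
```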
2004
The reconstruction of 3D face models is mostly achieved by using 2D images. We compare the strengths and weaknesses of different image processing techniques for 3D face generation. It is anticipated that the optimal solution will be applied in the future for 3D face analysis and synthesis. As approaches to 3D face modelling, the paper presents: binocular stereo, using a stereo correspondence algorithm or manual triangulation; orthogonal views; photometric stereo.
Journal of Global Research in Computer Sciences, 2011
To provide a comprehensive survey, we not only categorize existing modeling techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as biometric modalities, system evaluation, and issues of illumination and pose variation are covered. 3D models hold more information about the face, such as surface information, that can be used for face recognition or subject discrimination. This paper gives a survey of techniques and methods for 3D face modeling: first, model-based face reconstruction; second, methods for 3D face models, divided into three parts (holistic matching methods, feature-based (structural) matching methods, and hybrid methods); third, other methods, categorized into three classes (2D-based, 3D-based and 2D+3D-based). There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing l...
pphmj.com
Face recognition is one of the most intensively studied topics in the field of computer vision and pattern recognition. In this paper, two statistical models of facial shadow and shape, embedded within a shape-from-shading (SFS) algorithm, are used to ...
2007
In our previous work we presented a new 2D-3D mixed face recognition scheme called Partial Principal Component Analysis (P²CA) [1]. The main contribution of P²CA is that it uses 3D data in the training stage but accepts either 2D or 3D information in the recognition stage. We think that 2D-3D mixed approaches are the next step in face recognition research, since most surveillance or access control applications have only a single camera, which is used to acquire a single 2D texture image. Nevertheless, one of the main problems of our previous work was the enrollment of new persons in the database (gallery set), since a total of five different pictures were needed to obtain the 180° texture maps (manual morphing). Thus, this work focuses on the automatic and fast creation of those 180° texture maps from only two images (frontal and profile views). Preliminary results show that there is no significant degradation of recognition accuracy when using this automatically and synthetically created gallery set instead of the one created by morphing the five views manually.
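A hedged sketch of the enrollment idea above: assemble a wide texture map from a frontal and a profile strip that are assumed to be already warped to cylindrical coordinates, blending the shared columns. The warping itself and the blending window are assumptions; the paper's automatic mapping is not reproduced here.

```python
# Stitch two pre-warped cylindrical strips into one wide texture map.
import numpy as np

def build_texture_map(frontal_strip, profile_strip, overlap=20):
    """frontal_strip, profile_strip: (H, W) cylindrical strips sharing
    `overlap` columns at the seam between them."""
    h, w_f = frontal_strip.shape
    _, w_p = profile_strip.shape
    out = np.zeros((h, w_f + w_p - overlap))
    out[:, :w_f] = frontal_strip
    # Linear cross-fade over the shared columns to hide the seam.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :]
    out[:, w_f - overlap:w_f] = (alpha * frontal_strip[:, -overlap:]
                                 + (1 - alpha) * profile_strip[:, :overlap])
    out[:, w_f:] = profile_strip[:, overlap:]
    return out
```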
Journal of Global Research in Computer Sciences, 2011
In this paper, two 2D photographs are used: a front view giving (x, y) and a side view giving (y, z). A necessary condition of this method is that the positions (coordinates) of both images be aligned. We combine both images according to their coordinates to obtain a 3D model (x, y, z), but this 3D model is not yet accurate in size or shape. In other words, we obtain a rough 3D face model, which is then refined by editing points and smoothing. Smoothing is performed to obtain a more realistic 3D face model of the person. We measure and compare the average modeling time and compare our results with different techniques. For this purpose we test three hypotheses: (1) the average quality of our method is higher than 60%; (2) it is faster than the others in the average case; (3) it is automated. The first hypothesis proved correct, the second tied with the other three methods, and the third was found satisfactory.
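A minimal sketch of the coordinate-combination idea, assuming named landmarks are available in both views: the front view supplies (x, y), the side view supplies (y, z), and matching by the shared y coordinate yields rough (x, y, z) positions. The landmark correspondence and the smoothing step below are assumptions, not the paper's exact procedure.

```python
# Combine front-view (x, y) and side-view (y, z) landmarks into (x, y, z).
import numpy as np

def combine_views(front_xy, side_yz):
    """front_xy, side_yz: dict landmark_name -> (2,) coordinates."""
    points3d = {}
    for name in front_xy.keys() & side_yz.keys():
        x, y_front = front_xy[name]
        y_side, z = side_yz[name]
        # Both views are assumed aligned so the y values agree; average them
        # to absorb small registration errors.
        points3d[name] = np.array([x, 0.5 * (y_front + y_side), z])
    return points3d

def smooth(points3d, iterations=1):
    """Very crude smoothing: pull every point slightly toward the centroid."""
    pts = dict(points3d)
    for _ in range(iterations):
        centroid = np.mean(list(pts.values()), axis=0)
        pts = {k: 0.9 * v + 0.1 * centroid for k, v in pts.items()}
    return pts
```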
Electrical Engineering …, 2011
The model-based face recognition approach is based on constructing a model of the human face that is able to capture facial variations. Basic knowledge of the human face is heavily utilized to create the model. In this paper, we address and review the approaches and techniques used over the last ten years for modeling the human face in the 3D domain. Our discussion also shows the pros and cons of each approach used in 3D face modeling.