2010
Figure 1. Overview of our method: from a 2D photo and its corresponding facial landmarks (a)-(b), the facial texture data is extracted by successive 2D triangular subdivisions (c)-(d), producing a new 3D face model (e)-(f).
The Visual Computer, 2009
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial textures (2D photographs). The proposed system allows one to obtain a 3D geometry representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, the facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photograph. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, an input face image is given, to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions belonging to ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, thus corroborating the efficiency and applicability of the proposed system. (J.P. Mena-Chalco, R.M. Cesar Jr.)
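The training phase described above builds an orthonormal texture basis with PCA. A minimal sketch of that step, using NumPy and placeholder data (the array dimensions and retained component count are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Hypothetical stand-in for the training textures: one flattened texture
# vector per sample (70 samples, matching the 70-expression dataset
# described above; the dimension 256 is illustrative).
rng = np.random.default_rng(0)
textures = rng.standard_normal((70, 256))

# PCA via SVD of the mean-centered data matrix.
mean_texture = textures.mean(axis=0)
centered = textures - mean_texture
_, _, Vt = np.linalg.svd(centered, full_matrices=False)

k = 10                       # number of retained principal components
basis = Vt[:k]               # orthonormal rows spanning the texture space

# Each training face is approximated as the mean plus its coefficients
# times the basis; an analogous basis is built for the geometry data.
coeffs = centered @ basis.T
approx = mean_texture + coeffs @ basis
```

The geometry basis is constructed the same way from the range images, giving the pair of spaces the reconstruction phase moves between.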
2008 XXI Brazilian Symposium on Computer Graphics and Image Processing, 2008
This paper presents a 3D face photography system based on a small set of training facial range images. The training set is composed of 2D texture and 3D range images (i.e. geometry) of a single subject with different facial expressions. The basic idea behind the method is to create texture and geometry spaces based on the training set, together with transformations to go from one space to the other. The main goal of the proposed approach is to obtain a geometry representation of a given face provided as a texture image, which undergoes a series of transformations through the texture and geometry spaces. Facial feature points are obtained by an active shape model (ASM) extracted from the 2D gray-level images. PCA is then used to represent the face dataset, thus defining an orthonormal basis of texture and range data. An input face is given as a gray-level face image, to which the ASM is matched. The extracted ASM is fed to the PCA basis representation and a 3D version of the 2D input image is built. Experimental results on static images and video sequences, using seven samples as the training dataset, show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, thus corroborating the efficiency of our approach.
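The texture-to-geometry transformation at the core of this approach can be sketched as follows. This is a hedged illustration, not the authors' implementation: the seven training pairs are random placeholders, and the mapping between coefficient spaces is fit by ordinary least squares as one plausible choice of transformation between the two PCA spaces:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical paired training data: a texture vector and a range
# (geometry) vector per sample, seven samples as in the paper.
T = rng.standard_normal((7, 64))   # flattened textures
G = rng.standard_normal((7, 96))   # corresponding range data

# Center each space and build its PCA basis via SVD.
tm, gm = T.mean(0), G.mean(0)
Bt = np.linalg.svd(T - tm, full_matrices=False)[2]   # texture basis
Bg = np.linalg.svd(G - gm, full_matrices=False)[2]   # geometry basis

# Express every training sample in each basis, then fit a linear map
# from texture coefficients to geometry coefficients (least squares).
Ct = (T - tm) @ Bt.T
Cg = (G - gm) @ Bg.T
M, *_ = np.linalg.lstsq(Ct, Cg, rcond=None)

def reconstruct_geometry(texture):
    """Project a texture into the texture space, map the coefficients
    to the geometry space, and expand back to range data."""
    c = (texture - tm) @ Bt.T
    return gm + (c @ M) @ Bg
```

For a training texture the round trip reproduces its paired geometry; for a novel ASM-aligned input it yields the estimated 3D face.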
2018 24th International Conference on Pattern Recognition (ICPR), 2018
Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape (obtained from standard factorization approaches over the input video) using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
2010 23rd SIBGRAPI Conference on Graphics, Patterns and Images, 2010
In this paper we introduce a facial animation system using real three-dimensional models of people acquired by a 3D scanner. We consider a dataset composed of models displaying different facial expressions, and a linear interpolation technique is used to produce a smooth transition between them. One-to-one correspondences between the meshes of each facial expression are required in order to apply the interpolation process. Instead of computing dense correspondences directly, some points are selected and a triangulation is defined, which is refined by consecutive subdivisions that compute the matches of intermediate points. We are able to animate any model of the dataset given its texture information for the neutral face and the geometry information for all the expressions along with the neutral face. This is done by computing matrices with the variations of every vertex when changing from the neutral face to the other expressions. The matrices obtained in this process make it possible to animate other models given only the texture and geometry information of the neutral face. Furthermore, the system uses 3D reconstructed models and is thus capable of generating a three-dimensional facial animation from a single 2D image of a person. As an extension of the system, we also use artificial models that contain expressions of visemes, which are not part of the expressions of the dataset, and apply their displacements to the real models. This allows these models to be given as input to a speech synthesis application in which the face speaks phrases typed by the user. Finally, we generate an average face and amplify the displacements between a subject from the dataset and the average face, automatically creating a caricature of the subject.
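The displacement-matrix idea described above, per-vertex offsets from the neutral face that can be blended and applied to other models, can be sketched as follows (vertex counts, expression names and offset magnitudes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_vertices = 500
neutral = rng.standard_normal((n_vertices, 3))   # neutral-face mesh vertices
expressions = {                                   # hypothetical expression meshes
    "smile": neutral + rng.normal(scale=0.05, size=(n_vertices, 3)),
    "surprise": neutral + rng.normal(scale=0.05, size=(n_vertices, 3)),
}

# Displacement matrices: per-vertex offset from the neutral face
# to each expression, as computed in the paper.
displacements = {name: mesh - neutral for name, mesh in expressions.items()}

def animate(neutral_face, weights):
    """Blend expressions by linear interpolation of displacement matrices.

    `weights` maps expression names to blend factors in [0, 1]; applying
    the displacements to a *different* neutral face animates that model.
    """
    out = neutral_face.copy()
    for name, w in weights.items():
        out += w * displacements[name]
    return out

# A 50% smile applied to a neutral face.
frame = animate(neutral, {"smile": 0.5})
```

Amplifying the displacements between a subject and the average face (weights greater than 1) gives the caricature effect mentioned at the end of the abstract.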
2010
In this paper we present a system which automatically generates a 3D face model from a single frontal image of a face with the help of a generic 3D model, and allows the synthesis of various expressions. Our system consists of three components. The first component detects features such as the eyes, mouth, eyebrows and the contour of the face. After the features are detected, the second component automatically adapts the generic 3D model into a face-specific 3D model using geometric transformations. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters (FAPs). To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. Our system has the advantage that it is fully automatic, robust and fast. It can be used in a variety of applications for which depth accuracy is not critical, such as games, avatars and face recognition. We have tested and evaluated our system using a standard database, namely BU-3DFE.
Multimedia Modeling, 1997
In this paper, we present a new semiautomatic method to reconstruct a 3D facial model for animation from two orthogonal pictures taken from front and side views. The method is based on extracting the hair and face outlines and detecting interior features in the regions of the mouth, eyes, etc. We show how to use structured snakes for extracting the profile boundaries.
Lecture Notes in Computer Science, 2006
We introduce an efficient approach for representing a human face using a limited number of images. This compact representation allows for meaningful manipulation of the face. Principal Components Analysis (PCA) utilized in our research makes possible the separation of facial features so as to build statistical shape and texture models. Thus changing the model parameters can create images with different expressions and poses. By presenting newly created faces for reviewers' marking in terms of intensities on masculinity, friendliness and attractiveness, we analyze relations between the parameters and intensities. With feature selections, we sort those parameters by their importance in deciding the three aforesaid aspects. Thus we are able to control the models and transform a new face image to be a naturally masculine, friendly or attractive one. In the PCA-based feature space, we can successfully transfer expressions from one subject onto a novel person's face.
This article surveys three-dimensional facial reconstruction approaches and some commonly used methods. We implement three-dimensional facial reconstruction algorithms based on various face databases, using a single image as input, and analyze their performance on several aspects. Researchers have proposed many solutions to this problem, but most have their drawbacks and limitations. We then discuss three-dimensional shapes and models based on facial techniques in detail. The article concludes with an analysis of several implementations and some technical discussions about 3D facial reconstruction.
Eprint Arxiv 0912 0600, 2009
Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues for future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent imperfect orthogonality condition and incoherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.
2008
Abstract This paper presents a novel method for real-time animation of highly-detailed facial expressions based on a multi-scale decomposition of facial geometry into large-scale motion and fine-scale details, such as expression wrinkles.
Journal of Visual Communication and Image Representation, 2012
A 3D facial reconstruction and expression modeling system which creates 3D video sequences of test subjects and facilitates interactive generation of novel facial expressions is described. Dynamic 3D video sequences are generated using computational binocular stereo matching with active illumination and are used for interactive expression modeling. An individual's 3D video set is annotated with control points associated with face subregions. Dragging a control point updates texture and depth in only the associated ...
2002
This paper presents a method for creating photorealistic textured 3D face models of specific people for dynamic facial expression animation. The modeling approach reconstructs an accurate geometrical face model based on individual face measurements, containing both shape and texture information, acquired from a laser range scanner. By using a semi-automatic registration and merging technique, a 3D dense face mesh is recovered from the partial range data obtained from arbitrary multiple different views. A model editing and adaptive meshing scheme is then used to refine the surface model. Having recovered the facial geometry, we add realism by mapping the model with high-resolution texture images. The resultant synthetic face has been shown to be visually similar to the true face. Based on the geometrically accurate surface model, a physically-based face model with a hierarchical structure of the skin, muscles and skull is developed from an anatomical perspective. The dynamic displacement of nodes in the skin lattice under the influence of internal muscular forces is calculated by a numerical integration method. Using our technique, we have been able to generate highly realistic face models and flexible expressions.
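The numerical integration of skin nodes under muscular forces can be illustrated with a toy mass-spring chain; the one-dimensional layout and all constants are illustrative assumptions, not the paper's anatomical model:

```python
import numpy as np

# Toy 1D chain of skin nodes connected by springs, driven by a muscle
# force at one end; node 0 is anchored to the "skull". Semi-implicit
# Euler integration of Newton's second law.
n = 5
pos = np.linspace(0.0, 1.0, n)    # node positions along the chain
vel = np.zeros(n)
rest = pos[1] - pos[0]            # spring rest length
k, mass, damping, dt = 50.0, 1.0, 2.0, 0.001

def step(pos, vel, muscle_force):
    force = np.zeros(n)
    # Internal spring forces between neighbouring nodes.
    for i in range(n - 1):
        stretch = (pos[i + 1] - pos[i]) - rest
        f = k * stretch
        force[i] += f
        force[i + 1] -= f
    force -= damping * vel        # simple velocity damping
    force[-1] += muscle_force     # external muscular pull on the end node
    force[0] = 0.0                # anchored node receives no net force
    vel = vel + dt * force / mass
    vel[0] = 0.0
    pos = pos + dt * vel
    return pos, vel

for _ in range(2000):             # simulate 2 seconds
    pos, vel = step(pos, vel, muscle_force=1.0)
```

At equilibrium each spring transmits the full muscle force, so the chain stretches by roughly `(n - 1) * muscle_force / k`; the real model replaces this chain with a 3D skin lattice and anatomically placed muscles.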
IEEE Transactions on Visualization and Computer Graphics, 2014
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-three tensor to build a bilinear face model with two attributes, identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
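The bilinear model described above evaluates a face by contracting the rank-three core tensor with identity and expression weight vectors. A minimal sketch with a toy tensor (the dimensions are illustrative; FaceWarehouse's are far larger):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical core tensor: (vertex coordinates) x (identities) x (expressions).
n_verts, n_id, n_exp = 30, 4, 5
core = rng.standard_normal((n_verts, n_id, n_exp))

def synthesize(w_id, w_exp):
    """Contract the core tensor with identity and expression weights."""
    return np.einsum("vie,i,e->v", core, w_id, w_exp)

# One-hot weights pick out a single identity/expression slice of the
# tensor; general weights blend identities and expressions continuously.
face = synthesize(np.eye(n_id)[1], np.eye(n_exp)[2])
```

Fitting the two weight vectors to observed data is what enables the applications listed in the abstract, such as face component transfer and animation retargeting.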
2012
This paper presents a novel approach to facial expression synthesis and animation using real datasets of people acquired by 3D scanners. Three-dimensional faces are generated automatically through an interface provided by the scanners. The acquired raw human face surfaces go through a pre-processing stage using rigid and nonrigid registration methods, and each face surface is then synthesized using linear interpolation approaches and multivariate statistical methods. Point-to-point correspondences between face surfaces are required in order to perform the reconstruction and synthesis processes. Our experiments focused on dense correspondence, as well as on using selected landmarks to compute the deformation of facial expressions. The placement of landmarks is based on the Facial Action Coding System (FACS) framework, and the movements were analysed according to the motions of the facial features. We have also worked on reconstructing a 3D face surface from a single...
In this paper, we present the details of our research on 3D facial expression analysis and representations. Facial expression models are extensively used in real-time rendering and animation where interaction with virtual characters is present. The paper reviews the fundamental concepts in facial expression representations and looks at possible methodologies for extracting related facial features from motion-captured data. We propose the use of rubber-sheet transformations to extract the deformation details of a triangulated 3D facial motion-captured sequence. This allows us to represent triangulated facial regions in terms of linear transforms, and also to use them to represent the dynamics of facial expressions. Lastly, we outline our approach to analysing and representing 3D facial expression deformations in terms of facial expression clusters and symmetrical facial characteristics.
2005
The creation of personalised 3D characters has evolved to provide a high degree of realism in both appearance and animation. Beyond generic characters, the capability now exists to create a personalised character from images of an individual, offering the possibility of immersing an individual into a virtual world. Feature detection, particularly on the face, can greatly enhance the realism of the model. To address this, innovative contour-based templates are used to extract an individual from four orthogonal views, providing localisation of the face. Adaptive facial feature extraction from multiple views is then used to enhance the realism of the model.
2010
This paper presents a fully automatic approach to fitting a generic facial model to detailed range scans of human faces, reconstructing 3D facial models and textures with no manual intervention (such as specifying landmarks). A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registrations between the generic model and range scans of different sizes. A new template-fitting method, formulated in an optimization framework that minimizes the physically based elastic energy derived from thin shells, then faithfully reconstructs the surfaces and textures from the range scans and yields dense point correspondences across the reconstructed facial models. Finally, we demonstrate a facial expression transfer method that clones facial expressions from the generic model onto the reconstructed facial models using the deformation transfer technique.
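For fixed correspondences, the rigid-plus-scale registration inside SICP reduces to a closed-form similarity fit; the sketch below uses the Umeyama solution as one standard way to perform that inner solve (the data here are synthetic, and the full SICP loop would alternate this solve with a nearest-neighbour correspondence search):

```python
import numpy as np

def similarity_align(src, dst):
    """Closed-form scale/rotation/translation aligning paired 3D points
    (the Umeyama solution), so that dst ~ scale * R @ src + t."""
    n = len(src)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / n)     # cross-covariance SVD
    d = np.ones(3)
    if np.linalg.det(U @ Vt) < 0:               # guard against reflections
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt
    scale = (sig * d).sum() * n / (S ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Synthetic check: a known similarity transform should be recovered.
rng = np.random.default_rng(4)
src = rng.standard_normal((10, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
scale, R, t = similarity_align(src, dst)
```

Solving for scale alongside rotation and translation is what lets SICP register a generic model against range scans of different sizes.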
Pattern Recognition, 2005
We present a fully automated algorithm for facial feature extraction and 3D face modeling from a pair of orthogonal frontal and profile view images of a person's face taken by calibrated cameras. The algorithm starts by automatically extracting corresponding 2D landmark facial features from both view images, then computes their 3D coordinates. Further, we estimate the coordinates of the features that are hidden in the profile view based on the visible features extracted in the two orthogonal face images. The 3D coordinates of the selected feature points obtained from the images are used first to align, then to locally deform, the corresponding facial vertices of the generic 3D model. Preliminary experiments to assess the applicability of the resulting models for face recognition show encouraging results.
2008 Digital Image Computing: Techniques and Applications, 2008
This paper surveys the topic of 3D face reconstruction using 2D images from a computer science perspective. Various approaches have been proposed as solutions to this problem, but most have their limitations and drawbacks. Shape from shading, shape from silhouettes, shape from motion, and analysis by synthesis using morphable models are currently regarded as the main methods of obtaining the facial information needed to reconstruct its 3D counterpart. Though this topic has gained a lot of importance and popularity, a fully accurate facial reconstruction mechanism has not yet been identified due to the complexity and ambiguity involved. This paper discusses the general approaches to 3D face reconstruction and their drawbacks. It concludes with an analysis of several implementations and some speculations about the future of 3D face reconstruction.
International Journal of Computer Vision, 2004