2004
A method is proposed to track full hand motion from 3D points reconstructed with a stereoscopic pair of cameras. This approach combines the advantages of methods that use 2D motion (e.g. optical flow) with those that use a 3D reconstruction at each time frame to capture the hand motion. Matching either contours or a 3D reconstruction against a 3D hand model is usually very difficult because of self-occlusions and the locally cylindrical structure of each phalanx in the model, but our use of 3D point trajectories constrains the motion and overcomes these problems. Our tracking procedure uses both the 3D point matches between two time frames and a smooth surface model of the hand, built with implicit surfaces. We use animation techniques to faithfully represent the skin motion, especially near the joints. Robustness is obtained by using an EM version of the ICP algorithm to match points between consecutive frames; the tracked points are then registered to the surface of the hand model. Results are presented on a stereoscopic sequence of a moving hand and are evaluated using a side view of the sequence.
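The EM version of ICP mentioned in this abstract can be sketched as a soft-assignment E-step followed by a closed-form rigid fit (Kabsch/SVD) as the M-step. The sketch below is illustrative only, not the authors' implementation; the single fixed Gaussian scale `sigma` and the point counts are assumptions:

```python
import numpy as np

def em_icp_step(src, dst, sigma=0.05):
    """One EM-ICP step: softly assign each source point to all target
    points (E-step), then fit the rigid transform that best aligns the
    weighted "virtual" correspondences (M-step)."""
    # E-step: Gaussian responsibilities of each dst point for each src point
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)   # (N, M)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)                         # rows sum to 1
    virt = w @ dst                                            # virtual matches, (N, 3)
    # M-step: closed-form rigid fit (Kabsch/SVD) to the virtual matches
    mu_s, mu_v = src.mean(0), virt.mean(0)
    H = (src - mu_s).T @ (virt - mu_v)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_v - R @ mu_s
    return src @ R.T + t, R, t
```

Iterating this step anneals the soft assignment toward the hard nearest-neighbour matching of classical ICP while being less sensitive to the initial matches.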
ERCIM News, 2013
Actes du Colloque Scientifique …, 1999
We address the issue of 3D hand gesture analysis by monoscopic vision without body markers. A 3D articulated model is registered with image sequences. We compare several registration evaluation functions (edge distance, non-overlapping surface) and optimisation methods (Levenberg-Marquardt, downhill simplex and Powell). Biomechanical constraints are integrated into the minimisation algorithm to constrain registration to realistic postures. Results on image sequences are presented. Potential applications include hand gesture acquisition and human-machine interfaces.
2009
In this paper we present an approach for animating a virtual hand model with animation data extracted from hand-tracking results. We track the hand using particle filters and deformable contour templates with a web camera; we then lift the 2D tracking results into 3D animation data using an inverse kinematics method; finally, a local-frame-based method is proposed to animate a 3D virtual hand with the 3D animation data. Our system runs in real time.
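The inverse-kinematics step that lifts 2D tracking results into joint angles can be illustrated with the textbook two-link planar case (e.g. two finger phalanges, solved analytically by the law of cosines). The link lengths and the elbow convention below are hypothetical, not taken from the paper:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link chain: given a
    fingertip target (x, y) and link lengths l1, l2, return the base and
    middle joint angles (one of the two possible elbow solutions)."""
    d2 = x * x + y * y
    # Law of cosines gives the middle-joint flexion angle
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp against round-off
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

A real finger adds a third phalanx and joint-angle coupling constraints, but the closed-form core is the same.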
Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998
We propose a method to estimate the pose of a hand in a sequence of stereo images. This is a difficult problem, since a hand is a complex object with a high number of degrees of freedom, and automatically segmenting the hand in the images is not easy. Our method is intended to address these problems. Two video cameras feed a stereo-correlation algorithm, allowing a 3D reconstruction of the scene. A 3D articulated model of the hand, made of truncated cones and spheres, is then fitted to this reconstruction in order to estimate the pose of the palm and fingers. We deal with model-based tracking of hand movement, in which we assume that the pose of the hand is known in the first images.
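The stereo-correlation stage ultimately reduces to triangulating matched pixels in a rectified image pair. A minimal sketch under assumed rectified geometry (focal length `f` in pixels, baseline `B` in metres, image coordinates centred on the principal point — all illustrative, not the paper's calibration):

```python
import numpy as np

def triangulate(xl, xr, y, f, B):
    """Depth from a rectified stereo pair: a point seen at column xl in
    the left image and xr in the right has disparity d = xl - xr and
    depth Z = f * B / d. Returns the 3D point in the left-camera frame."""
    d = xl - xr
    Z = f * B / d
    X = xl * Z / f        # back-project through the left camera
    Y = y * Z / f
    return np.array([X, Y, Z])
```

Running this over every correlated pixel pair yields the dense 3D reconstruction the articulated model is fitted to.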
Workshop on Synthetic-Natural Hybrid Coding and …, 1999
We address the issue of 3D hand gesture modelling given only one camera input and without body markers. A 3D articulated model of the hand is first adjusted to the user's hand morphology with respect to anthropometric constraints. It is then registered with image sequences by minimising an error function. Several functions (edge distance, non-overlapping surface) and optimisation methods (Levenberg-Marquardt, downhill simplex and Powell) are compared. Biomechanical constraints are integrated into the minimisation algorithm to force registration to realistic postures. Results on hand gesture image sequences are finally presented. Potential target applications include SNHC coding of human movements, virtual character animation, human-machine interaction and sign language recognition.
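The non-overlapping-surface cost compared in this abstract can be illustrated as the symmetric difference of two binary silhouettes. In the sketch below, an exhaustive integer-translation search stands in for the Levenberg-Marquardt / downhill-simplex / Powell optimisers the paper evaluates; the mask sizes and the translation-only pose space are assumptions for illustration:

```python
import numpy as np

def non_overlap_cost(model_mask, image_mask):
    """Non-overlapping-surface cost: count of pixels covered by exactly
    one of the two silhouettes. Zero iff the silhouettes coincide."""
    return int(np.logical_xor(model_mask, image_mask).sum())

def register_shift(model_mask, image_mask, search=5):
    """Find the integer translation of the model silhouette minimising
    the non-overlap cost by brute-force search over a small window."""
    best = (None, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(model_mask, dy, axis=0), dx, axis=1)
            c = non_overlap_cost(shifted, image_mask)
            if c < best[1]:
                best = ((dy, dx), c)
    return best
```

In the full problem the search runs over all articulated-model parameters rather than a 2D shift, which is why gradient-free optimisers such as downhill simplex and Powell become attractive.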
2002
Visually capturing human hand motion requires estimating the 3D global pose of the hand as well as its local finger articulations. This is a challenging task that requires searching a high-dimensional space, owing to the many degrees of freedom the fingers exhibit and the self-occlusions caused by global hand motion. In this paper we propose a divide-and-conquer approach to estimate both global and local hand motion. Using the palm and extra feature points provided by the fingers, the hand pose is determined from the palm using the Iterative Closest Point (ICP) algorithm and a factorization method. The global hand pose serves as the base frame for finger motion capture. Exploiting natural hand-motion constraints, we propose an efficient tracking algorithm based on the sequential Monte Carlo technique for tracking finger motion. To enhance accuracy, pose estimation and finger-articulation tracking are performed iteratively. Our experiments show that the approach is accurate and robust for natural hand movements.
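The sequential Monte Carlo tracker can be sketched as a standard SIR (sampling-importance-resampling) particle filter on a single joint angle. The random-walk motion model, the joint range, and the noise levels below are assumptions for illustration, not the paper's constrained finger model:

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=0.05,
                    obs_std=0.1, seed=0):
    """Minimal SIR particle filter tracking one finger joint angle:
    predict with a random-walk motion model clipped to the joint range,
    weight particles by a Gaussian observation likelihood, resample."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.0, np.pi / 2, n_particles)  # initial belief
    estimates = []
    for z in observations:
        # Predict: random-walk dynamics within the joint's range of motion
        particles = np.clip(particles + rng.normal(0, proc_std, n_particles),
                            0.0, np.pi / 2)
        # Update: Gaussian likelihood of the noisy angle measurement
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(w @ particles))             # posterior mean
        # Resample: draw particles in proportion to their weights
        particles = rng.choice(particles, n_particles, p=w)
    return estimates
```

The paper's separable treatment amounts to running such filters over lower-dimensional sub-states instead of one filter over the full articulation vector, which keeps the particle count manageable.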
Model-based methods for tracking an articulated hand in a video sequence generally use a cost function to compare the hand pose with a parametric three-dimensional (3D) hand model. This comparison allows the hand-model parameters to be adapted, so that hand gestures can be reproduced. Many proposed cost functions exploit either silhouette or edge features; unfortunately, these functions cannot handle the tracking of complex hand motion. This paper presents a new depth-based function to track complex hand motion, such as opening and closing the hand. Our proposed function compares 3D point clouds stemming from depth maps. Each hand point cloud is compared with several point clouds corresponding to different model poses, in order to find the model pose closest to the observed hand. To reduce the computational burden, we propose computing a volume of voxels from a hand point cloud, where each voxel is characterized by its distance to that cloud. When a model point cloud is placed inside this volume of voxels, its distance to the hand point cloud becomes fast to compute. Compared with other well-known functions such as the directed Hausdorff distance (Huttenlocher et al., 1993), our proposed function is better adapted to the hand-tracking problem and faster than the Hausdorff function.
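The voxel-volume idea reduces each pose evaluation to table lookups: precompute, once per frame, the distance from every voxel centre to the hand cloud, then score any candidate model cloud by indexing into that volume. A brute-force sketch (a real implementation would use a 3D distance transform; the cubic grid extent and resolution are assumptions):

```python
import numpy as np

def build_distance_volume(hand_pts, lo, hi, res):
    """For each voxel centre of a res^3 grid spanning [lo, hi]^3, store
    the Euclidean distance to the nearest hand point (brute force)."""
    axis = np.linspace(lo, hi, res)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    centres = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    d = np.linalg.norm(centres[:, None, :] - hand_pts[None, :, :], axis=-1)
    return d.min(axis=1).reshape(res, res, res)

def cloud_cost(model_pts, volume, lo, hi):
    """Average model-to-hand distance via one voxel lookup per model
    point, instead of a nearest-neighbour search over the hand cloud."""
    res = volume.shape[0]
    step = (hi - lo) / (res - 1)
    idx = np.clip(np.rint((model_pts - lo) / step).astype(int), 0, res - 1)
    return float(volume[idx[:, 0], idx[:, 1], idx[:, 2]].mean())
```

The precomputation is shared across all candidate model poses evaluated in a frame, which is where the speed-up over a per-pose directed Hausdorff computation comes from.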
Journal of Computational Design and Engineering, 2014
In this paper, we propose a new method of reconstructing hand models for individuals, comprising the link structure model, the homologous skin surface model and the homologous tetrahedral mesh model in a reference posture. For the link structure model, the local coordinate system attached to each link consists of the joint rotation centre and the axes of joint rotation, which are estimated from the trajectories of optical markers on the corresponding skin surface region of the subject, obtained with a motion capture system. The skin surface model is defined as a three-dimensional triangular mesh, obtained by deforming a template mesh so as to fit its landmark vertices to the corresponding marker positions obtained from the motion capture system. In this process, anatomical dimensions of the subject, measured manually with a caliper, are also used as deformation constraints.
Pattern Recognition, 2011
Vision-based hand motion capture approaches play a critical role in human-computer interfaces owing to their non-invasiveness, cost effectiveness, and user friendliness. This work presents a multi-view vision-based method to capture hand motion. A 3D hand model with structural and kinematic constraints is developed to ensure that the proposed hand model behaves like an ordinary human hand. Hand motion in a high-degree-of-freedom space is estimated by developing a separable state-based particle filtering (SSBPF) method to track finger motion. By integrating different features, including silhouette, Chamfer distance, and depth maps from different view angles, the proposed tracking system captures the hand motion parameters effectively and handles the self-occlusion problem of finger motion. Experimental results indicate that the hand joint angle estimation generates an average error of 111.
The Imaging Science Journal, 2008
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2017
Proceedings of the International Conference on Computer Vision Theory and Applications, 2010
Proceedings, International Conference on Image Analysis and Recognition, 2012
ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020
IEEE ISUVR 2013, 2013
Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments
2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1 (CVPR'06), 2006
Lecture Notes in Computer Science, 2013
Advances in Intelligent Systems and Computing, 2015
Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015
Advances in Visual Computing, 2015