
A New Depth-Based Function for 3D Hand Motion Tracking

Abstract

Model-based methods for tracking an articulated hand in a video sequence generally use a cost function to compare the observed hand pose with a parametric three-dimensional (3D) hand model. This comparison drives the adaptation of the model parameters and thus makes it possible to reproduce hand gestures. Many proposed cost functions exploit either silhouette or edge features; unfortunately, such functions cannot handle the tracking of complex hand motion. This paper presents a new depth-based cost function for tracking complex hand motions such as hand opening and closing. The proposed function compares 3D point clouds derived from depth maps: each observed hand point cloud is compared with several point clouds corresponding to different model poses, in order to find the model pose closest to the observed one. To reduce the computational burden, we precompute a volume of voxels from the hand point cloud, where each voxel stores its distance to that cloud. Once a model point cloud is placed inside this voxel volume, its distance to the hand point cloud can be computed quickly. Compared with well-known alternatives such as the directed Hausdorff distance (Huttenlocher et al., 1993), the proposed function is better suited to the hand tracking problem and is faster to evaluate.
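The precomputed voxel volume described above can be sketched as follows. This is a minimal NumPy illustration of the general idea only, not the paper's implementation: all function names, the brute-force distance computation, and the mean-distance scoring rule are assumptions made for the example. The key point it demonstrates is that after a one-time volume construction, scoring each candidate model pose reduces to cheap voxel lookups.

```python
import numpy as np

def build_distance_volume(hand_points, grid_min, grid_max, resolution):
    """Precompute, for every voxel, the distance to the nearest point
    of the hand cloud (brute force; fine for a small illustration)."""
    axes = [np.arange(grid_min[d], grid_max[d], resolution) for d in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    centers = np.stack([gx, gy, gz], axis=-1)          # (nx, ny, nz, 3)
    diff = centers[..., None, :] - hand_points[None, None, None, :, :]
    return np.linalg.norm(diff, axis=-1).min(axis=-1)  # (nx, ny, nz)

def cloud_to_hand_distance(model_points, volume, grid_min, resolution):
    """Score one model pose: map each model point to its voxel and
    average the stored distances -- O(#points) per candidate pose."""
    idx = np.floor((model_points - grid_min) / resolution).astype(int)
    idx = np.clip(idx, 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]].mean()

# Toy example: a 3-point "hand" cloud and two candidate model poses.
hand = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
gmin, gmax, res = np.array([-1.0] * 3), np.array([2.0] * 3), 0.1
volume = build_distance_volume(hand, gmin, gmax, res)  # built once

d_same = cloud_to_hand_distance(hand, volume, gmin, res)        # near 0
d_shift = cloud_to_hand_distance(hand + [0.5, 0, 0], volume, gmin, res)
```

Accuracy is limited by the voxel resolution (here, errors up to about one voxel width), which is the usual trade-off of such distance-volume lookups against exact nearest-neighbour queries.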