In this work, we attempt to tackle the problem of skeletal tracking of a human body using the Microsoft Kinect sensor. We use cues from the RGB and depth streams of the sensor to fit a stick skeleton model to the human upper body. A variety of computer vision techniques are combined in a bottom-up approach: candidate head and upper-body positions are estimated using Haar cascade detectors, and hand positions are estimated using skin segmentation. These cues are finally integrated with the Extended Distance Transform skeletonisation algorithm to obtain a fairly accurate estimate of the skeleton parameters. The results presented show that this method can be extended to perform in real time.
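As a rough illustration of the Haar cascade and skin-segmentation cues mentioned above (not the authors' implementation; the cascade files and HSV skin thresholds are assumptions), a minimal OpenCV sketch might look like:

# Illustrative sketch: Haar cascade head/upper-body candidates plus a coarse
# HSV skin mask for hand candidates. Thresholds and cascade choices are assumed.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
upper_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_upperbody.xml")

def detect_candidates(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    heads = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    torsos = upper_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    # Skin segmentation in HSV space to find candidate hand regions.
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    skin_mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return heads, torsos, skin_mask

In a full pipeline, these candidate regions would then be reconciled with the depth-based skeletonisation step rather than used on their own.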
2015
The Microsoft Kinect sensor offers high-resolution RGB and depth sensing and is becoming available for widespread use. It supports object tracking, object detection and recognition, as well as human activity analysis, hand gesture analysis and 3D mapping. Facial expression detection is widely used in human-computer interfaces, and the Kinect depth camera can be used to detect common facial expressions. The face is tracked with the Microsoft Kinect 2.0 SDK, which uses the depth map to create a 3D frame model of the face. By recognising facial expressions from facial images, a number of human-computer interaction applications can be built. This paper describes the working of the Kinect and its use in human skeleton tracking.
International Journal of Advancements in Computing Technology, 2012
Current research on skeleton tracking techniques focuses on image processing with a video camera, constrained by limits on bone and joint movement detection. This paper proposes a 3D skeleton tracking technique using a depth camera, the Kinect sensor, which approximates human poses so that they can be captured, reconstructed and displayed as a 3D skeleton in a virtual scene using the OPENNI, NITE Primesense and CHAI3D open source libraries. The technique performs bone-joint movement detection in real time with correct position tracking and displays a 3D skeleton in a virtual environment, with the ability to control 3D character movements in future research.
AIP Conf. Proc. 2845, 030008, 2023
Motion detection and tracking systems are used to quantify the mechanics of motion in many fields of research. Industrial systems are highly accurate but expensive and complicated to use, and they can still show imprecision for delicate activities. The Microsoft Kinect sensor is a practical and cheap device for accessing skeletal data, so it can be used to detect and track the body in fields such as medicine, sports and motion analysis; it offers good accuracy and can track six people in real time. Studies use single or multiple Kinect devices together with different classification methods and approaches, such as machine learning algorithms and neural networks. Some studies use public datasets such as CAD-60, MSRAction3D and 3D Action Pairs, while others collect their own datasets from participants of different ages and genders. Some work connects a Kinect device to a robot to reproduce movements, or carries out the process in virtual reality using an avatar built with Unreal Engine. In this research, we present the related work on this subject, along with the methods, datasets and applications used.
Applied Sciences
The Azure Kinect, the successor of Kinect v1 and Kinect v2, is a depth sensor. In this paper, we evaluate the skeleton tracking abilities of the new sensor, namely accuracy and precision (repeatability). Firstly, we state the technical features of all three sensors, since we want to put the new Azure Kinect in the context of its previous versions. Then, we present the experimental results of general accuracy and precision obtained by measuring a plate mounted to a robotic manipulator end effector which was moved along the depth axis of each sensor, and compare them. In the second experiment, we mounted a human-sized figurine to the end effector and placed it in the same positions as the test plate, located 400 mm from each other. In each position, we measured the relative accuracy and precision (repeatability) of the detected figurine body joints. We compared the results and concluded that the Azure Kinect surpasses its discontinued predecessors in both accuracy and precision.
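To make the two reported quantities concrete, a minimal sketch of how per-joint accuracy and precision (repeatability) could be computed from repeated measurements follows; the data layout and the exact definitions are assumptions, not taken from the paper.

# Illustrative accuracy/precision computation for one joint measured N times.
import numpy as np

def joint_accuracy_precision(measurements, reference):
    """measurements: (N, 3) repeated joint positions in mm; reference: (3,) ground-truth position."""
    measurements = np.asarray(measurements, dtype=float)
    errors = np.linalg.norm(measurements - np.asarray(reference, dtype=float), axis=1)
    accuracy = errors.mean()  # mean Euclidean error against the reference (mm)
    centroid = measurements.mean(axis=0)
    spread = np.linalg.norm(measurements - centroid, axis=1)
    precision = spread.std()  # repeatability: scatter around the measurement centroid (mm)
    return accuracy, precision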
In this paper, we propose a system to track human body movements in real time. Kinect sensors are used to capture depth and audio streams. The system integrates two modules, a Kinect module and an Augmented Reality module. The Kinect module performs voice recognition and captures depth images that the Augmented Reality module uses to compute distance parameters. The Augmented Reality module also captures real-time image streams from a high-resolution camera. The system generates a 3D model that is superimposed on the real-time data.
IOSR Journal of Computer Engineering, 2017
In this work we implemented a system to obtain human body measurements without physically contacting the user. The implementation covers methods for obtaining 3D measurements using the Kinect v2 depth sensor. At this initial stage, the developed system can detect and obtain personalised body parameters such as height, shoulder length, neck-to-hip length, hip-to-leg length and arm length by using the relevant skeleton joints, and the front perimeter at the chest, stomach and waist by using the relevant 3D pixels. According to the results, the height and arm length measurements agree well with the actual values, with errors below 5% (measurements taken in centimetres). A maximum error of 12% was observed when calculating the front perimeter at the chest. The experimental results obtained from the developed system are in an acceptable range for dressing purposes and are ultimately helpful for designing a real-time 3D virtual dressing room.
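A minimal sketch of deriving such body lengths from skeleton joints is shown below; the joint names follow the Kinect v2 convention, but the dictionary layout and the choice of joints per measurement are assumptions rather than the paper's exact method.

# Illustrative joint-to-joint body measurements from Kinect v2 camera-space joints.
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

def body_measurements(j):
    """j maps Kinect v2 joint names to (x, y, z) camera-space coordinates in metres."""
    return {
        "height":          dist(j["Head"], j["FootLeft"]),           # rough proxy for standing height
        "shoulder_length": dist(j["ShoulderLeft"], j["ShoulderRight"]),
        "neck_to_hip":     dist(j["Neck"], j["SpineBase"]),
        "arm_length":      dist(j["ShoulderLeft"], j["ElbowLeft"])
                           + dist(j["ElbowLeft"], j["WristLeft"]),
    }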
Automatic human joint detection is used in many applications nowadays. In this paper, we propose an approach to detect full-body human joints using depth and colour images. The proposed solution is divided into three stages: an image preprocessing stage, a distance transform stage, and an anthropometric constraint analysis stage. The output of our solution is a stickman model with the same pose as in the given input image. Our implementation uses a Microsoft Kinect RGB and depth camera with 480x640 image resolution. The performance of the solution is demonstrated on several human postures.
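For the distance transform stage, a simplified sketch of the idea (an illustrative reconstruction, not the authors' code; the depth thresholds are assumptions) is: segment the person from the depth image, then compute the distance to the background, whose ridge approximates the limb and torso centre lines that a stickman model can be fitted to.

# Illustrative distance-transform cue on a depth-derived person mask.
import cv2
import numpy as np

def distance_transform_skeleton_cue(depth_mm, near=500, far=2500):
    # Segment the person by a coarse depth range, then clean up the mask.
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    # Distance to the nearest background pixel; its local maxima trace the body axes.
    dist_map = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    return mask, dist_map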
Journal of Manufacturing Systems, 2014
We present a novel approach to track a full human body mesh with a single depth camera, e.g. the Microsoft Kinect, using a template body model. The proposed observation-oriented tracking mainly targets fitting the body mesh silhouette to the 2D user boundary in the video stream by deforming the body. It is fast enough to be integrated into real-time or interactive applications, which is impossible with traditional iterative optimisation based approaches. Our method consists of two main stages: user-specific body shape estimation and on-line body tracking. We first develop a novel method to fit a 3D morphable human model to the actual body shape of the user in front of the depth camera, using a strategy based on two constraints: point clouds from depth images, and the correspondence between the foreground user mask contour and the boundary of the projected body model. On-line tracking then proceeds in successive steps. At each frame, the joint angles of the template skeleton are optimised towards the captured Kinect skeleton. Then the aforementioned contour correspondence is used to adjust the projected body model vertices towards the contour points of the foreground user mask, using a Laplacian deformation technique. Experimental results show that our method achieves fast, high-quality tracking. We also show that the proposed method benefits three applications: virtual try-on, full human body scanning and applications in manufacturing systems.
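A heavily simplified stand-in for the contour-correspondence step is sketched below: each projected body-model silhouette vertex is matched to its nearest point on the foreground user-mask contour, which would then serve as a deformation target. The variable names are assumptions, and the actual paper applies a Laplacian deformation rather than a plain nearest-point snap.

# Illustrative nearest-point contour correspondence between projected silhouette
# vertices and the foreground user mask boundary.
import cv2
import numpy as np

def contour_correspondence(user_mask, projected_silhouette_xy):
    contours, _ = cv2.findContours(user_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour_pts = np.vstack([c.reshape(-1, 2) for c in contours]).astype(float)
    targets = []
    for p in np.asarray(projected_silhouette_xy, dtype=float):
        d = np.linalg.norm(contour_pts - p, axis=1)
        targets.append(contour_pts[np.argmin(d)])
    return np.asarray(targets)  # deformation targets for the silhouette vertices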
2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2013
We propose a real-time human motion capturing system to estimate upper-body motion parameters, namely the positions of the upper limb joints, based on depth images captured with the Kinect. The system consists of an action type classifier and a set of body part classifiers. For each action type, a body part classifier segments the depth map into 16 body parts whose centroids can be linked to represent the human body skeleton. Finally, we exploit the temporal relationship between body parts to handle occlusions and recover the depth information of occluded body parts. In the experiments, we show that with the Kinect our system can estimate the upper limb motion parameters of a human subject effectively in real time.
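The step from a per-pixel body-part labelling to skeleton points can be illustrated by taking the centroid of each labelled part, as sketched below; the 16-part label map and its encoding are assumptions, not the paper's exact representation.

# Illustrative centroid extraction from a per-pixel body-part label map.
import numpy as np

def part_centroids(label_map, num_parts=16):
    """label_map: (H, W) array of body-part labels, 0 = background."""
    centroids = {}
    for part in range(1, num_parts + 1):
        ys, xs = np.nonzero(label_map == part)
        if len(xs) > 0:
            centroids[part] = (float(xs.mean()), float(ys.mean()))
    return centroids  # linking these centroids yields the stick skeleton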
Iraqi Journal for Electrical and Electronic Engineering, 2021
In this paper, a new method is proposed for people tracking using the human skeleton provided by the Kinect sensor. Our method is based on skeleton data, which includes the coordinates of each joint in the human body. For data classification, the Support Vector Machine (SVM) and Random Forest techniques are used. To this end, 14 movement classes are defined, and the Kinect sensor is used to extract data containing 46 features which are then used to train the classification models. The system was tested on 12 subjects, each of whom performed the 14 movements in each experiment. Experimental results show that the best average accuracy is 90.2% for the SVM model and 99% for the Random Forest model. From the experiments, we concluded that the best distance between the Kinect sensor and the human body is one metre.
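A hedged sketch of this classification stage is given below: 46 skeleton-derived features per sample and 14 movement classes, compared with an SVM and a Random Forest. The use of scikit-learn, the kernel choice and the train/test split are assumptions; the paper does not state its toolkit or hyperparameters.

# Illustrative SVM vs. Random Forest comparison on skeleton-feature data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_movement_classifiers(X, y):
    """X: (n_samples, 46) skeleton features; y: movement labels in 0..13."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    svm = SVC(kernel="rbf").fit(X_tr, y_tr)
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    return svm.score(X_te, y_te), forest.score(X_te, y_te)  # per-model test accuracy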