2013
The recent advancement of motion recognition using the Microsoft Kinect has stimulated many new ideas in motion capture and virtual reality applications. Utilizing a pattern recognition algorithm, the Kinect can determine the positions of different body parts of the user. However, due to the use of a single depth camera, recognition accuracy drops significantly when body parts are occluded. This hugely limits the …
International Journal of Pattern Recognition and Artificial Intelligence
Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has attracted intense interest in research and development on Kinect technology. In this article, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and for lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.
Sensors, 2013
Motion capture systems have recently undergone a strong evolution. New, cheap depth sensors and open source frameworks such as OpenNI make it possible to perceive human motion online without invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue by using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematic constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves on pure OpenNI results at a very low computational cost.
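The abstract's central idea of enforcing kinematic constraints on a noisy tracked skeleton can be illustrated with a minimal sketch. The function below is an assumption for illustration, not the paper's actual filter: it projects a tracked joint back onto the sphere defined by the bone length learned for the performer's body.

```python
import numpy as np

def enforce_bone_length(parent, child, target_len):
    """Project a noisy child joint so that the bone keeps its
    learned length (a simple kinematic constraint).

    parent, child: 3D joint positions, np.ndarray of shape (3,)
    target_len:    the bone length learned for this performer
    """
    bone = child - parent
    norm = np.linalg.norm(bone)
    if norm < 1e-9:               # degenerate pose: leave joint untouched
        return child
    return parent + bone * (target_len / norm)

# Example: the tracker placed the elbow 0.4 m from the shoulder,
# but the learned upper-arm length is 0.3 m.
shoulder = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.4, 0.0, 0.0])
corrected = enforce_bone_length(shoulder, elbow, 0.3)
```

Applied per bone each frame, this kind of projection removes impossible limb stretching at negligible computational cost, consistent with the low overhead the authors report.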
Lecture Notes in Computer Science, 2014
Despite the emergence of many input devices and sensors, they still do not provide good, simple recognition of human postures and gestures. Recognition using simple algorithms implemented on top of these devices (like the Kinect) extends the use of gestures and postures to new domains and systems. Our method cuts the required computation and allows other algorithms to run in parallel. We present a system able to track the hand in 3D, log its position and surface information over time, and recognize hand postures and gestures. We present our solution based on simple geometric algorithms, discuss other algorithms we tried, and examine some concepts raised by our tests.
Indonesian Journal of Electrical Engineering and Computer Science, 2018
This paper presents a method to detect multiple human body postures using the Kinect sensor. In this study, a combination of shape features and body joint points is used as the input features. The Kinect sensor, which uses an infrared camera to produce a depth image, is suitable for environments with varying lighting conditions. Human detection is performed by processing the depth image and joint (skeleton) data, which overcomes several problems such as cluttered backgrounds, various articulated poses, and changes in color and illumination. The body joint coordinates found on the object are then used to calculate the body proportion ratio. In the experiment, the average proportions of three body parts are obtained to verify the suitability of using the golden ratio in this work. Finally, the measured body proportion is compared with the golden ratio to determine whether the found object is a real human body. The method is tested in various scenarios, with a high true-positive human detection rate across postures, and it is able to detect a human body in low lighting and in a dark room. The average body proportions obtained from the experiment are close to the golden ratio.
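The golden-ratio verification step described above can be sketched in a few lines. The joint coordinates and the tolerance value here are hypothetical placeholders, not taken from the paper; the sketch only shows the shape of the check: measure a body proportion from joint positions and accept the object as human if the ratio is close to the golden ratio.

```python
import math

GOLDEN_RATIO = (1 + math.sqrt(5)) / 2   # ~1.618

def segment_length(a, b):
    """Euclidean distance between two joint coordinates."""
    return math.dist(a, b)

def is_human_proportion(ratio, tolerance=0.15):
    """Accept the detected object as human if its measured body
    proportion is within a tolerance of the golden ratio."""
    return abs(ratio - GOLDEN_RATIO) <= tolerance

# Hypothetical joints: ratio of full height to navel height,
# a classic golden-ratio body proportion.
head = (0.0, 1.80)
navel = (0.0, 1.11)
feet = (0.0, 0.0)
ratio = segment_length(head, feet) / segment_length(navel, feet)
```

Here `ratio` comes out near 1.62, so the check accepts the object; a false detection (e.g. a chair-shaped blob) would typically yield a proportion far from 1.618 and be rejected.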
2015 International Conference on Healthcare Informatics, 2015
The Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers in various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of human kinematic motion data in the first and second generations of the Kinect system, and compare the results with an optical motion capture system. We collected motion data from 10 subjects performing 12 exercises, recorded from three different viewpoints. We report on the accuracy of the joint localization and bone length estimation of Kinect skeletons in comparison to the motion capture system. We also analyze the distribution of the joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that overall the Kinect 2 tracks human pose more robustly and more accurately than the Kinect 1.
2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2013
We propose a real-time human motion capture system to estimate upper body motion parameters, consisting of the positions of the upper limb joints, from depth images captured by the Kinect. The system consists of an action type classifier and body part classifiers. For each action type, a body part classifier segments the depth map into 16 different body parts whose centroids can be linked to represent the human body skeleton. Finally, we exploit the temporal relationships between body parts to handle occlusion and recover the depth information of occluded body parts. The experiments show that our Kinect-based system can estimate the upper limb motion parameters of a human subject effectively in real time.
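The step of linking body-part centroids into a skeleton can be illustrated with a minimal sketch, assuming a per-pixel body-part labeling has already been produced by the classifier (the function name and the tiny synthetic depth map are illustrative, not the paper's code):

```python
import numpy as np

def part_centroids(depth, labels, n_parts=16):
    """Compute the centroid of each labeled body part in a depth map.

    depth:  H x W array of depth values
    labels: H x W array of per-pixel body-part labels in [0, n_parts)
    Returns an (n_parts, 3) array of (row, col, depth) centroids;
    parts with no pixels are left as NaN.
    """
    centroids = np.full((n_parts, 3), np.nan)
    rows, cols = np.indices(depth.shape)
    for p in range(n_parts):
        mask = labels == p
        if mask.any():
            centroids[p] = (rows[mask].mean(), cols[mask].mean(),
                            depth[mask].mean())
    return centroids

# Tiny synthetic example: a 2x2 depth map with two labeled parts.
depth = np.ones((2, 2))
labels = np.array([[0, 0], [1, 1]])
c = part_centroids(depth, labels, n_parts=3)
```

Linking these centroids in a fixed parent-child order then yields the stick-figure skeleton; the NaN entries flag parts that are fully occluded in the current frame, which is where the temporal correction described above would take over.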
Smart Innovation, Systems and Technologies, 2020
Human-robot interaction requires a robust estimate of human motion in real time. This work presents a fusion algorithm that tracks joint center positions from multiple depth cameras to improve the accuracy of human motion analysis. The main contribution is an algorithm that fuses body tracking measurements with an extended Kalman filter and anthropomorphic constraints, independently of the sensors used. As an illustration, the paper presents a direct comparison of joint center positions estimated with a reference stereophotogrammetric system against those estimated with the new Kinect 3 (Azure Kinect) sensor and its predecessor, the Kinect 2 (Kinect for Windows). The experiment was carried out in two parts, one for each Kinect model, comparing raw body tracking data against the data of two side-facing Kinects merged with the proposed algorithm. The proposed approach improves body tracker data for the Kinect 3, whose characteristics differ from those of the Kinect 2. The study also shows the importance of defining good heuristics for merging data depending on how the body tracking works: with proper heuristics, the joint center position estimates improve by at least 14.6%. Finally, we provide an additional comparison between the Kinect 2 and Kinect 3, highlighting the pros and cons of the two sensors.
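The core fusion idea can be sketched in its simplest form. For a static state, the Kalman update across sensors reduces to inverse-variance weighting; the sketch below uses that reduction with made-up measurement variances, and is not the paper's extended Kalman filter with anthropomorphic constraints.

```python
import numpy as np

def fuse_joint(measurements, variances):
    """Inverse-variance weighted fusion of one joint center position
    seen by several depth cameras. For a static state this is what
    the Kalman measurement update reduces to."""
    w = 1.0 / np.asarray(variances, dtype=float)   # per-camera confidence
    pts = np.asarray(measurements, dtype=float)
    return (pts * w[:, None]).sum(axis=0) / w.sum()

# Two side-facing Kinects see the same wrist with different noise
# levels (variances are hypothetical heuristics):
left_cam = [0.10, 1.00, 2.00]    # lower noise  -> variance 0.01
right_cam = [0.20, 1.00, 2.00]   # higher noise -> variance 0.04
fused = fuse_joint([left_cam, right_cam], [0.01, 0.04])
```

The fused estimate lands closer to the more trustworthy camera, which is exactly the role the heuristics mentioned in the abstract play: deciding, per joint and per frame, how much to trust each tracker's measurement.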
2018 IEEE 4th Middle East Conference on Biomedical Engineering (MECBME), 2018
This paper presents a human posture recognition system based on depth imaging. The proposed system efficiently models human postures by exploiting the depth information captured by an RGB-D camera. Firstly, a skeleton model is used to represent the current pose. The human skeleton configuration is then analyzed in 3D space to compute joint-based features. Our feature set characterizes the spatial configuration of the body through the 3D joint pairwise distances and the geometric angles defined by the body segments. Posture recognition is then performed with a supervised classification method. To evaluate the proposed system, we created a new challenging dataset with significant variability in participants and acquisition conditions. The experimental results demonstrate the high precision of our method in recognizing human postures while remaining invariant to several perturbation factors, such as changes in scale and orientation. Moreover, because it is based on depth imaging from the infrared sensor of an RGB-D camera, the system operates efficiently regardless of the illumination conditions in an indoor environment.
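The feature set described above (3D pairwise joint distances plus angles between body segments) is straightforward to construct. The sketch below is a generic interpretation, assuming the skeleton is given as an array of 3D joints and the angle triples are chosen by the user; the exact features and joint selection in the paper may differ.

```python
import numpy as np
from itertools import combinations

def posture_features(joints, angle_triples):
    """Joint-based posture descriptor: all 3D pairwise distances
    between joints, followed by the angle at joint j between the
    segments j->i and j->k for each (i, j, k) triple.

    joints:        (N, 3) array of 3D joint positions
    angle_triples: list of (i, j, k) joint-index triples
    """
    feats = []
    for a, b in combinations(range(len(joints)), 2):
        feats.append(np.linalg.norm(joints[a] - joints[b]))
    for i, j, k in angle_triples:
        u = joints[i] - joints[j]
        v = joints[k] - joints[j]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(feats)

# Three joints forming a right angle at joint 1 (e.g. a bent elbow):
joints = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
f = posture_features(joints, [(0, 1, 2)])
```

Normalizing the distances by a reference bone length would give the scale invariance the abstract claims; angles are scale- and translation-invariant by construction.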
2013 IEEE 4th International Conference on Cognitive Infocommunications (CogInfoCom), 2013
Non-verbal communication, such as kinesics (body language) and posture, comprises important codes used to establish and maintain interpersonal relationships. It can also be exploited for safe and efficient human-robot interaction. Correctly interpreting human activity through the analysis of certain spatio-temporal and dynamic parameters is a significant benefit for the quality of human-machine communication in general. This paper presents an effective, non-invasive markerless motion capture system carried by a mobile robot for sensing human activity. We present a physical-model-based method exploiting the robot's embedded Kinect. Its performance is evaluated first by comparing its results to those obtained with a precise marker-based 3D motion capture system and with data from a dynamic posturography platform. An experiment in real-life conditions is then performed to assess the system's sensitivity to some gait disturbances.