In this paper, we propose a system that tracks human body movements in real time. A Kinect sensor is used to capture depth and audio streams. The system integrates two modules: a Kinect module and an Augmented Reality module. The Kinect module performs voice recognition and captures depth images that the Augmented Reality module uses to compute distance parameters. The Augmented Reality module also captures real-time image streams from a high-resolution camera. The system generates a 3D model that is superimposed on the real-time data.
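As an illustration of the distance computation this abstract mentions, the sketch below back-projects a pixel of a Kinect depth frame to 3D camera coordinates and measures its distance from the sensor. The intrinsic values, function names, and synthetic frame are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): estimate the distance to a
# tracked point from a Kinect depth frame using assumed camera intrinsics.
import numpy as np

# Assumed Kinect v1 depth intrinsics (approximate values, illustration only).
FX, FY = 594.2, 591.0      # focal lengths in pixels
CX, CY = 339.5, 242.7      # principal point

def pixel_to_camera(depth_mm: np.ndarray, u: int, v: int) -> np.ndarray:
    """Back-project pixel (u, v) of a depth frame in millimetres to 3D camera coordinates."""
    z = depth_mm[v, u] / 1000.0            # depth in metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def distance_to_point(depth_mm: np.ndarray, u: int, v: int) -> float:
    """Euclidean distance from the sensor to the back-projected point."""
    return float(np.linalg.norm(pixel_to_camera(depth_mm, u, v)))

# Synthetic 480x640 depth frame in which everything lies 2 m from the sensor.
depth = np.full((480, 640), 2000, dtype=np.uint16)
print(distance_to_point(depth, 320, 240))   # roughly 2.0 m
```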
AIP Conf. Proc. 2845, 030008, 2023
Motion detection and tracking systems are used to quantify the mechanics of motion in many fields of research. Despite their high accuracy, industrial systems are expensive and complicated to use, and they have shown imprecision in delicate activities. To deal with these limitations, the Microsoft Kinect sensor is used as a practical and inexpensive device for accessing skeletal data, so it can be used to detect and track the body in domains such as medicine, sports, and motion analysis, because it offers good accuracy and can track up to six people in real time. Studies use single or multiple Kinect devices together with different classification methods and approaches such as machine learning algorithms, neural networks, and others. Some works use public databases such as CAD-60, MSRAction3D, and 3D Action Pairs, while others build their own databases collected from subjects of different ages and genders. Some studies connect a Kinect device to a robot to simulate movements, or carry out the process in virtual reality using an avatar built with Unreal Engine. In this paper, we present the related work on this subject, together with the methods, databases, and applications used.
2012
This research developed an application that tracks and locates human presence and position in an indoor environment using multiple depth cameras. The Kinect, the most affordable device equipped with a depth camera, was used in this work. The application obtains stream data from the Kinect and detects human presence using the skeletal tracking library in the Kinect for Windows SDK v1. It also visualizes the human's location in a 3D environment built with Windows Presentation Foundation (WPF) 4.0. To visualize the 3D object correctly, the application takes into account the coverage that may intersect when two Kinects are placed adjacently, so that the final human location is combined. The application was tested in 3 different scenarios, and the average error in determining the human location was found to be 0.13589 meters.
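The combination of overlapping Kinect views described above can be pictured with the following sketch: each sensor's detection is mapped into a shared world frame through known extrinsics, and overlapping detections are averaged. The extrinsic values, example positions, and function names are assumptions, not taken from the paper.

```python
# Minimal sketch under assumed extrinsics: fuse the position of one person seen
# by two adjacent Kinects (rotation R and translation t known per sensor).
import numpy as np

def to_world(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map a point from a Kinect's camera frame into the shared world frame."""
    return R @ p_cam + t

def fuse(positions_world: list) -> np.ndarray:
    """Combine detections of the same person from overlapping sensors by averaging."""
    return np.mean(positions_world, axis=0)

# Kinect A defines the world origin; Kinect B sits 2 m to the right, angled 30 degrees inward.
R_a, t_a = np.eye(3), np.zeros(3)
theta = np.deg2rad(-30.0)
R_b = np.array([[np.cos(theta), 0, np.sin(theta)],
                [0, 1, 0],
                [-np.sin(theta), 0, np.cos(theta)]])
t_b = np.array([2.0, 0.0, 0.0])

p_a = np.array([0.90, 0.0, 2.10])        # person as seen by Kinect A (metres)
p_b = np.array([0.08, 0.0, 2.35])        # same person as seen by Kinect B (noisy)
print(fuse([to_world(p_a, R_a, t_a), to_world(p_b, R_b, t_b)]))
```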
2015
The Microsoft Kinect sensor provides high-resolution depth and RGB sensing and is becoming available for widespread use. It supports object tracking, object detection, and recognition, as well as human activity analysis, hand gesture analysis, and 3D mapping. Facial expression detection is widely used in human-computer interfaces, and the Kinect depth camera can be used to detect common facial expressions. The face is tracked using the MS Kinect with SDK 2.0, which uses the depth map to create a 3D frame model of the face. By recognizing facial expressions from facial images, a number of applications in the field of human-computer interaction can be built. This paper describes the working of the Kinect and its use in human skeleton tracking.
The 5th International Conference on Automation, Robotics and Applications, 2011
Gesture recognition is essential for human-machine interaction. In this paper we propose a method to recognize human gestures using a Kinect depth camera. The camera views the subject from the front and generates a depth image of the subject in the plane facing the camera. This depth image is used for background removal, followed by generation of the depth profile of the subject. In addition, the difference between subsequent frames gives the motion profile of the subject and is used for gesture recognition. Together, these allow the efficient use of the depth camera to successfully recognize multiple human gestures. The result of a case study involving 8 gestures is shown. The system was trained using a multi-class Support Vector Machine.
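A hedged sketch of the pipeline described above follows: depth thresholding for background removal, frame differencing for the motion profile, and a multi-class SVM over the concatenated features. Thresholds, feature sizes, and placeholder training data are assumptions, not the paper's exact implementation.

```python
# Sketch of the described pipeline (assumed parameters, not the authors' code).
import numpy as np
from sklearn.svm import SVC

def depth_profile(depth_mm: np.ndarray, near=500, far=2500) -> np.ndarray:
    """Keep only pixels within [near, far] mm, i.e. the subject in front of the camera."""
    mask = (depth_mm > near) & (depth_mm < far)
    return np.where(mask, depth_mm, 0).astype(np.float32)

def motion_profile(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Difference of consecutive depth profiles highlights the moving body parts."""
    return np.abs(curr - prev)

def features(prev_frame, curr_frame, size=(32, 32)) -> np.ndarray:
    """Downsample and concatenate depth and motion profiles into one feature vector."""
    d = depth_profile(curr_frame)
    m = motion_profile(depth_profile(prev_frame), d)
    step_r, step_c = d.shape[0] // size[0], d.shape[1] // size[1]
    small = lambda img: img[::step_r, ::step_c][:size[0], :size[1]].ravel()
    return np.concatenate([small(d), small(m)])

# Train on labelled gesture clips (placeholder random features, 8 gesture classes).
X = np.random.rand(80, 2 * 32 * 32)
y = np.random.randint(0, 8, size=80)
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)

# Predict from two synthetic consecutive depth frames.
prev = np.random.randint(400, 3000, size=(480, 640)).astype(np.uint16)
curr = np.random.randint(400, 3000, size=(480, 640)).astype(np.uint16)
print(clf.predict(features(prev, curr).reshape(1, -1)))
```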
2018
In this paper we introduce a virtual reality room setup using commonly available, moderately expensive devices. We implement head position tracking with a Microsoft Kinect V2 sensor, and use an Android device with a gyroscope to track the user's head rotation and to display the virtual world. The workstation handling the Kinect can also insert the user's point cloud into the virtual world so that its interactions can be inspected in real time.
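One way to picture the combination of Kinect head position and phone gyroscope rotation is the sketch below, which assembles the two into a 4x4 view matrix for rendering. The yaw-pitch-roll convention and all values are assumptions made for illustration, not taken from the paper.

```python
# Sketch (our illustration): build a camera pose from the Kinect head position
# (metres) and the gyroscope head rotation (radians), convention assumed.
import numpy as np

def view_matrix(head_pos: np.ndarray, yaw: float, pitch: float, roll: float) -> np.ndarray:
    """4x4 world-to-camera matrix from position and yaw-pitch-roll angles."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    R = Ry @ Rx @ Rz                       # camera-to-world rotation (assumed composition order)
    view = np.eye(4)
    view[:3, :3] = R.T                     # world-to-camera rotation
    view[:3, 3] = -R.T @ head_pos          # world-to-camera translation
    return view

# Head 1.6 m high, 2 m from the sensor, turned slightly to the left.
print(view_matrix(np.array([0.1, 1.6, 2.0]), yaw=0.3, pitch=0.0, roll=0.0))
```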
2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015
An increasing number of systems use indoor positioning for scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available, and the use of depth cameras is becoming more and more attractive as these devices become affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low-cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion, and to specify a global positioning projection that maintains compatibility with outdoor positioning systems. The monitoring of people's trajectories at home is intended for the early detection of a shift in daily activities that highlights disabilities and loss of autonomy. The system is meant to improve homecare health management for a better end of life at a sustainable cost for the community.
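The global positioning projection mentioned above can be sketched as follows: an indoor position in metres, relative to a surveyed reference point, is converted to latitude and longitude with a local tangent plane approximation so it stays compatible with outdoor systems. The reference coordinates and function names are assumptions for illustration.

```python
# Illustrative sketch (assumed approach, not the paper's exact projection).
import math

EARTH_RADIUS = 6378137.0   # metres (WGS-84 equatorial radius)

def local_to_global(x_east: float, y_north: float, lat0: float, lon0: float):
    """Small-offset approximation: metres east/north of (lat0, lon0) to lat/lon degrees."""
    dlat = math.degrees(y_north / EARTH_RADIUS)
    dlon = math.degrees(x_east / (EARTH_RADIUS * math.cos(math.radians(lat0))))
    return lat0 + dlat, lon0 + dlon

# A person detected 3.2 m east and 1.5 m north of a room corner whose surveyed
# coordinates are assumed to be (48.8566 N, 2.3522 E).
print(local_to_global(3.2, 1.5, 48.8566, 2.3522))
```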
2018
Hand gesture recognition has been recognized as one of the emerging fields in research today, providing a natural way of communication between humans and machines. Gestures are forms of body motion that a person expresses when doing a task or giving a reply. Human body tracking is a well-studied topic in today's era of human-computer interaction, and it can be performed by virtue of human skeleton structures. These skeleton structures can be detected successfully thanks to the rapid progress of depth-measuring devices. Human body movements can be observed using these depth sensors, which provide sufficient accuracy while tracking the full body in real time at low cost. In reality, action and reaction activities are rarely periodic in a multi-person setting, and recognizing such complex aperiodic gestures is highly challenging for detection in surveillance systems.
Existing video communication systems, used in private business or video teleconferencing, show only part of the users' bodies on the display. This lacks realistic sensation: showing only part of the body does not make users feel they are communicating with someone close by; rather, it makes them feel they are communicating with someone far away. Furthermore, although these systems have file transfer functions such as sending and receiving file data, they do not offer intuitive manipulation; they rely only on a mouse or a touch display. To solve these problems, we propose a 3D communication system supported by Kinect and an HMD (head-mounted display) to provide users with communication that has realistic sensation and intuitive manipulation. The system shows the whole body of the users on the HMD, as if they were in the same room, through 3D reconstruction. It also enables users to transfer and share information by intuitively manipulating virtualized objects toward the other users. The result of this paper is a system that extracts the human body using Kinect, reconstructs the extracted body on the HMD, and also recognizes the users' hands so that AR objects can be manipulated by hand.
International Journal of Computer and Electrical Engineering, 2018
The paper discusses a sign language translation system using depth and coordinate sensing. It defines the major units of the system: the first is based on learning from the impaired person and defining custom text accordingly; the second is based on speech recognition and converting it into sign language; the third is predefined American Sign Language translation based on actions performed by the impaired person. The system works completely wirelessly and detects signs and actions through the sensors placed in the system. The system performs a summation along each coordinate axis and matches the result with the costs of each sign stored in the system. The sign with the nearest cost is displayed on the screen and spoken aloud by the system on behalf of the impaired person.
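The matching step described above can be illustrated with the sketch below: the tracked joint coordinates are summed along each axis, and the stored sign whose precomputed "cost" vector is nearest is selected. The sign names, cost values, and function names are hypothetical.

```python
# Hedged sketch of nearest-cost sign matching (illustrative values only).
import numpy as np

def sign_cost(joints_xyz: np.ndarray) -> np.ndarray:
    """joints_xyz: (num_joints, 3) array; cost = per-axis sum of joint coordinates."""
    return joints_xyz.sum(axis=0)

def match_sign(observed: np.ndarray, stored: dict) -> str:
    """Return the stored sign whose cost vector is closest to the observed one."""
    return min(stored, key=lambda name: np.linalg.norm(stored[name] - observed))

stored_costs = {                       # hypothetical per-sign cost vectors
    "HELLO":     np.array([3.1, 4.8, 21.0]),
    "THANK_YOU": np.array([2.4, 5.6, 20.2]),
}
observed = sign_cost(np.random.rand(20, 3) * np.array([0.3, 0.5, 2.0]))
print(match_sign(observed, stored_costs))
```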
2017
The objective of this dissertation research is to use the Kinect sensor, a motion-sensing input device, to develop an integrated software system for tracking non-compliant activity postures of consenting health-care workers, in order to assist the workers' compliance with best practices. The system supports individualized gestures for privacy-aware user registration, movement recognition using a rule-based algorithm, real-time feedback, and exercise data collection. The research work also includes developing a graphical user interface and a data visualization program that presents statistical information to administrators, as well as utilizing a cloud-based database system for data storage.
International Journal of Pattern Recognition and Artificial Intelligence
Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has commanded intense interest in research and development on Kinect technology. In this article, we present a comprehensive survey on Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers to investigate better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.
Lecture Notes in Computer Science, 2014
Despite the emergence of many input devices and sensors, they are still unable to provide good and simple recognition of human postures and gestures. Recognition using simple algorithms implemented on top of these devices (like the Kinect) extends the use cases for gestures and postures to newer domains and systems. Our method cuts the needed computation and allows other algorithms to run in parallel. We present a system able to track the hand in 3D, log its position and surface information over time, and recognize hand postures and gestures. We present our solution based on simple geometric algorithms, the other algorithms we tried, and we discuss some concepts raised by our tests.
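As one plausible example of the kind of lightweight geometric test alluded to above (our assumption, not the paper's algorithm), the sketch below classifies an open versus closed hand posture from the spread of 3D hand points around their centroid.

```python
# Minimal geometric sketch (assumed technique): open vs. closed hand from point spread.
import numpy as np

def hand_posture(points_xyz: np.ndarray, open_threshold: float = 0.06) -> str:
    """points_xyz: (N, 3) hand surface points in metres from the depth sensor."""
    centroid = points_xyz.mean(axis=0)
    mean_spread = np.linalg.norm(points_xyz - centroid, axis=1).mean()
    return "open" if mean_spread > open_threshold else "closed"

# A loose cloud of points (fingers spread) versus a tight one (fist).
open_hand = np.random.normal(scale=0.08, size=(300, 3))
fist = np.random.normal(scale=0.03, size=(300, 3))
print(hand_posture(open_hand), hand_posture(fist))
```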
Jurnal Teknologi, 2015
Microsoft Kinect has been identified as a potential alternative tool in the field of motion capture due to its simplicity and low cost. To date, the application and potential of Microsoft Kinect have been vigorously explored, especially for entertainment and gaming purposes. However, its motion capture capability in terms of repeatability and reproducibility is still not well addressed. Therefore, this study aims to explore and develop a motion capture system using Microsoft Kinect, focusing on developing the interface, the motion capture protocol, and the measurement analysis. The work is divided into several stages: installation (Microsoft Kinect and MATLAB); parameter and experimental setup; interface development; protocol development; motion capture; and data tracking and measurement analysis. The results are promising, with variances of less than 1% for both the repeatability and the reproducibility analyses. This proves that the current study is significant an...
2013
The recent advancement of motion recognition using Microsoft Kinect stimulates many new ideas in motion capture and virtual reality applications. Utilizing a pattern recognition algorithm, Kinect can determine the positions of different body parts from the user. However, due to the use of a single depth camera, recognition accuracy drops significantly when the parts are occluded. This hugely limits the
This paper deals with intuitive ways of building an attractive Human Interaction Mirror (HIM) using the Microsoft Kinect sensor. Our work is mainly based on extracting the human body from the video stream and handling the user's interaction. The fusion of the user's body motion and a 3D cloth model is displayed virtually in our HIM mirror. The virtual image is generated by a hybrid of a skeletal tracking algorithm and a PCA-based face recognition algorithm. The precise fit of the 3D cloth to the superimposed image is achieved through skin-color detection, and the clothes are adapted to the body of the user in front of the interactive mirror. The Kinect SDK is used for various fundamental functions and for the tracking process, and the entire application is developed in the .NET framework.
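The skin-color detection step mentioned above can be sketched with a common HSV thresholding approach; the thresholds and function names below are assumptions for illustration, not necessarily the authors' values.

```python
# Illustrative skin-colour mask in HSV space (assumed thresholds).
import cv2
import numpy as np

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels in a colour frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed hue/sat/value bounds
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes small speckles before cloth alignment.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

frame = np.zeros((480, 640, 3), dtype=np.uint8)       # placeholder camera frame
print(skin_mask(frame).shape)
```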
Advances in Science, Technology and Engineering Systems Journal, 2017
This paper is an extension of work originally presented and published at the IEEE International Multidisciplinary Conference on Engineering Technology (IMCET). It presents the design and implementation of a moving-human tracking system with obstacle avoidance. The system scans the environment using a Kinect 3D sensor and tracks the center of mass of a specific user using Processing, an open-source programming language. An Arduino microcontroller drives the motors, enabling the robot to move towards the tracked user while avoiding obstacles in its trajectory. The implemented system is tested under different lighting conditions and the performance is analyzed using several generated depth images.
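A simplified sketch of such a control loop follows: steer toward the tracked user's center of mass and back off when the depth image shows a close obstacle in the path. The thresholds and command names are hypothetical, not from the paper.

```python
# Sketch of a follow-with-avoidance decision step (assumed thresholds/commands).
import numpy as np

def steering_command(com_x_norm: float, depth_mm: np.ndarray,
                     obstacle_mm: int = 600, deadband: float = 0.1) -> str:
    """com_x_norm: the user's centre of mass in [-1, 1] across the image width."""
    # Obstacle check: nearest valid depth reading in the central corridor of the frame.
    width = depth_mm.shape[1]
    corridor = depth_mm[:, width // 3 : 2 * width // 3]
    valid = corridor[corridor > 0]
    if valid.size and valid.min() < obstacle_mm:
        return "AVOID"                  # something closer than 0.6 m in the path
    if com_x_norm < -deadband:
        return "LEFT"                   # user drifted towards the robot's left
    if com_x_norm > deadband:
        return "RIGHT"                  # user drifted towards the robot's right
    return "FORWARD"

depth = np.full((480, 640), 3000, dtype=np.uint16)    # clear path, 3 m ahead
print(steering_command(0.4, depth))                    # -> "RIGHT"
```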
IEEE, 2016
This paper throws light on a human-following robot built with Kinect technology. The goal of this study is to overcome the limitations of existing robots by presenting a human-follower robot that moves under the direct control of the human in front of the camera. The paper focuses on how the Kinect sensor captures the 3D information of a scene and recognizes the human body by retrieving depth information. Microsoft Kinect is one of the latest advancements in the field of human-computer interaction (HCI), and various studies use the Kinect in computer vision and action recognition. In this process, the distance of the human from the Kinect camera is calculated from the depth data and used to move the robot forwards and backwards. The robot used here is a two-legged robot powered by an Arduino Uno microcontroller with a Bluetooth module (HC-05) for interfacing and data transfer to the robot. The approach has a wide range of uses depending on the application area. The robot is not tied to the particular person standing in front of it and will follow any other user if the original one leaves its field of view during movement.
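The follow behaviour described above can be sketched as a distance-band rule whose command is sent to the Arduino over the HC-05 Bluetooth serial link. The port name, baud rate, and one-byte command protocol below are assumptions for illustration only; on the Arduino side one would map each byte to the corresponding motor action.

```python
# Hedged sketch: distance-based follow command over an assumed serial protocol.
import serial   # pyserial

def follow_command(distance_m: float, near: float = 1.0, far: float = 1.8) -> bytes:
    """Keep the user inside the [near, far] band in front of the robot."""
    if distance_m > far:
        return b"F"      # move forward, user too far
    if distance_m < near:
        return b"B"      # move backward, user too close
    return b"S"          # stop, user within the comfort band

link = serial.Serial("/dev/rfcomm0", 9600, timeout=1)   # assumed HC-05 port name
link.write(follow_command(2.3))                          # user at 2.3 m -> b"F"
link.close()
```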
In this work, we attempt to tackle the problem of skeletal tracking of a human body using the Microsoft Kinect sensor. We use cues from the sensor's RGB and depth streams to fit a stick skeleton model to the human upper body. A variety of computer vision techniques are used in a bottom-up approach: candidate head and upper-body positions are estimated using Haar-cascade detectors, and hand positions using skin segmentation data. These cues are finally integrated with the Extended Distance Transform skeletonisation algorithm to obtain a fairly accurate estimate of the skeleton parameters. The results presented show that this method can be extended to perform in real time.
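A minimal sketch of the bottom-up candidate detection step follows, under the assumption of OpenCV's stock frontal-face Haar cascade and a simple HSV skin mask; the actual skeleton fitting is not shown, and the thresholds are illustrative.

```python
# Sketch of head and hand candidate detection (assumed cascade and thresholds).
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_and_hand_candidates(frame_bgr: np.ndarray):
    """Return Haar-cascade head boxes and a skin-colour mask of hand candidates."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    heads = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array([0, 40, 60], np.uint8),
                       np.array([25, 180, 255], np.uint8))
    return heads, skin

frame = np.zeros((480, 640, 3), dtype=np.uint8)       # placeholder RGB frame
boxes, skin = head_and_hand_candidates(frame)
print(len(boxes), skin.shape)
```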
Moving 3D modeling is widely used in many areas such as cinema, gaming, robot control, and training. Kinect is a system that can detect human movements and send them to a computer. It was developed by Microsoft for playing games on the Xbox console and has since been used for applications in other areas. This work aims to create a 3D human model using the Kinect.