2017, Journal on Multimodal User Interfaces
This paper describes a novel system for ultrasonic gesture recognition targeted at handheld devices, such as smartphones. Unlike previous proposals, the system obtains high-update-rate range estimates for the user's hand. The range of the hand is determined from the round-trip time of ultrasonic pulses emitted by a transducer on the device, reflected by the hand, and received at multiple sensors on the device. The range estimates, coupled with other information extracted from the reflected ultrasonic signals, are used for gesture recognition, which is performed by a multi-class hierarchical binary support vector machine. The high update rate is enabled by the use of compact wideband transducers. The ultrasonic pulses are short in duration and use linear frequency modulation (LFM) pulse compression to achieve high resolution in time-of-arrival estimation; the impact of multipath is reduced by frequency hopping. A system prototype using one transmitter and four receivers achieved gesture detection sensitivity and specificity of 99% each, and classification accuracy of 96%, for 7 users (5 males, 2 females) with around 500 repetitions per user over a set of 7 gesture types.
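As an illustration of the pulse-compression idea, here is a minimal sketch of matched-filter range estimation with an LFM chirp. The band (40-60 kHz), pulse length, sampling rate, and speed of sound are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np
from scipy.signal import chirp, correlate

# Assumed parameters (not from the paper): 40-60 kHz LFM pulse,
# 1 ms duration, 192 kHz sampling, speed of sound 343 m/s.
fs = 192_000
c = 343.0
t = np.arange(0, 1e-3, 1 / fs)
pulse = chirp(t, f0=40_000, t1=t[-1], f1=60_000)  # LFM "up-chirp"

def estimate_range(rx: np.ndarray) -> float:
    """Matched-filter (pulse-compression) range estimate from one receiver.

    rx: received signal, aligned so sample 0 is the emission instant.
    Returns the estimated hand range in metres (round trip halved).
    """
    # Cross-correlating with the known chirp compresses the pulse,
    # giving a sharp peak at the echo's time of arrival.
    mf = correlate(rx, pulse, mode="valid")
    toa = np.argmax(np.abs(mf)) / fs      # round-trip time in seconds
    return c * toa / 2.0                  # one-way range
```

Repeating this for each of the four receivers would give the multiple simultaneous range estimates the abstract describes.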
2017 25th European Signal Processing Conference (EUSIPCO), 2017
Hand gestures are tools for conveying information, expressing emotion, interacting with electronic devices, or even serving as a second language for disabled people. A gesture can be recognized by capturing the movement of the hand in real time and classifying the collected data. Several commercial products, such as the Microsoft Kinect, Leap Motion sensor, Synertial gloves, and HTC Vive, have been released, and new solutions have been proposed by researchers to handle this task. These systems are mainly based on optical measurements, inertial measurements, ultrasound signals, and radio signals. This paper proposes an ultrasonic gesture recognition system using Angle of Arrival (AOA) information from ultrasonic signals emitted by a wearable ultrasound transducer. The 2-D angles of the moving hand are estimated using multi-frequency signals captured by a fixed receiver array. A simple redundant-dictionary matching classifier is designed to recognize gestures representing the numbers '0' to '9' and is compared with a neural network classifier. Average classification accuracies of 95.5% and 94.4% are obtained with the two classification methods, respectively.
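For intuition, here is a minimal two-element sketch of narrowband AoA estimation from the inter-element phase difference. The paper uses a full receiver array and multiple frequencies; the element spacing, sampling rate, and sign convention below are assumptions.

```python
import numpy as np

def narrowband_aoa(x1, x2, f, fs=192_000, d=0.004, c=343.0):
    """Angle of arrival from the phase difference between two receivers.

    x1, x2: real-valued recordings from two adjacent array elements.
    f:      frequency of the emitted tone in Hz.
    d:      assumed element spacing; must be <= c / (2 * f) (half a
            wavelength) to avoid phase-wrapping ambiguity.
    """
    n = np.arange(len(x1))
    tone = np.exp(-2j * np.pi * f * n / fs)
    a1, a2 = np.dot(x1, tone), np.dot(x2, tone)   # complex amplitudes at f
    dphi = np.angle(a2 * np.conj(a1))             # inter-element phase lag
    # Plane-wave model: dphi = 2*pi*f*d*sin(theta)/c, theta from broadside.
    return np.arcsin(np.clip(c * dphi / (2 * np.pi * f * d), -1.0, 1.0))
```

Estimating this at several emitted frequencies and fusing the results is one way a multi-frequency scheme can reduce ambiguity and noise.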
The aim of this paper is to present a system that detects hand gesture motions using the Doppler effect. Ultrasonic waves are transmitted by a sensor module and reflected by a moving hand; the received waves are frequency-shifted by the Doppler effect, and these frequency shifts are characterized to determine gestures. Recognized gestures are mapped to commands that control the movement of a small robot. The current work covers four gestures: move forward, move backward, move left, and move right.
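A minimal sketch of the underlying relation: for a reflection off a hand moving at radial speed v, the shift is approximately 2·v·f0/c. The carrier, sampling rate, and search band below are assumptions.

```python
import numpy as np

def doppler_velocity(rx, f0=40_000.0, fs=192_000, c=343.0):
    """Radial hand speed from the Doppler shift of a reflected tone.

    Assumed parameters: 40 kHz carrier, 192 kHz sampling rate.
    Returns speed in m/s; positive means the hand is approaching.
    """
    spec = np.abs(np.fft.rfft(rx * np.hanning(len(rx))))
    freqs = np.fft.rfftfreq(len(rx), 1 / fs)
    # Search the sidebands only: skip the direct-path carrier (+/- 50 Hz)
    # and anything implausibly far (> 2 kHz) from it.
    band = (np.abs(freqs - f0) > 50) & (np.abs(freqs - f0) < 2_000)
    f_peak = freqs[band][np.argmax(spec[band])]
    return c * (f_peak - f0) / (2 * f0)

# A sustained positive velocity could, for example, map to the
# "move forward" robot command.
```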
2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014
Optical 3D imagers for gesture recognition suffer from large size and high power consumption. Their performance depends on ambient illumination, and they generally cannot operate in sunlight. These factors have prevented widespread adoption of gesture interfaces in energy- and volume-limited environments such as tablets and smartphones. Wearable mobile devices, too small to incorporate a touchscreen more than a few fingers wide, would benefit from a small, low-power gestural interface. Gesture recognition using sound is an attractive alternative due to its potential for chip-scale size, low power consumption, and insensitivity to ambient light. Using pulse-echo time-of-flight, MEMS ultrasonic rangers work over distances of up to a meter and achieve sub-mm ranging accuracy. Using a 2-dimensional array of transducers, objects can be localized in 3 dimensions. This paper presents an ultrasonic 3D gesture-recognition system that uses a custom transducer chip and an ASIC to sense the location of targets such as hands. The system block diagram is shown in Fig. 1.1. Targets are localized using pulse-echo time-of-flight methods. Each of the 10 transceiver channels interfaces with a MEMS transducer and includes a transmitter and a readout circuit. Echoes from off-axis targets arrive with different phase shifts at each element in the array. The off-chip digital beamformer realigns the signal phases to maximize the SNR and determine the target location.
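As a rough illustration of the beamforming step, here is a time-domain delay-and-sum sketch over a linear array. The array geometry and sampling rate are assumptions; the actual system realigns phases digitally off-chip.

```python
import numpy as np

def delay_and_sum(channels, element_x, theta, fs=1_000_000, c=343.0):
    """Delay-and-sum beamformer steered toward angle theta.

    channels:  (n_elements, n_samples) array of echo recordings.
    element_x: element positions along the array in metres.
    theta:     steering angle in radians from broadside.
    """
    out = np.zeros(channels.shape[1])
    for x, ch in zip(element_x, channels):
        # An echo from angle theta reaches an element at position x
        # later by x*sin(theta)/c; advance the channel to compensate.
        delay = int(round(x * np.sin(theta) / c * fs))
        out += np.roll(ch, -delay)     # np.roll wraps; fine for a sketch
    return out / len(channels)         # coherent gain peaks on-axis

# Sweeping theta and picking the angle with the strongest summed echo
# localizes the target in one dimension; a 2-D array extends this to 3-D.
```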
Applied Sciences, 2021
This paper presents an effective signal processing scheme for hand gesture recognition with a superior accuracy rate in judging identical and dissimilar hand gestures. The scheme is implemented with an air sonar comprising a cost-effective ultrasonic emitter and receiver pair along with signal processing circuitry. Through the circuitry, the Doppler signals of hand gestures are obtained and processed with the developed recognition algorithm. Four hand gestures are investigated: a push motion, wrist motion from flexion to extension, pinch out, and hand rotation. To judge the starting time of a hand gesture, a technique based on continuous short-period analysis is proposed; it can identify the starting time of a gesture with small-scale motion and avoid false triggers when no hand is in front of the sonar. Fusing the short-time Fourier transform spectrogram of the hand gesture with the image processing techniques of corner feature detection, feature desc...
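In the spirit of the continuous short-period analysis described above, here is a sketch of finding a gesture's onset from Doppler sideband energy in an STFT spectrogram. The window sizes, carrier, band limits, and threshold are assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import stft

def gesture_onset(rx, f0=40_000.0, fs=192_000, band=2_000.0, thresh=3.0):
    """Estimate gesture start time from Doppler sideband energy.

    rx: received sonar signal. Returns the onset time in seconds,
    or None if no frame exceeds the (assumed) threshold.
    """
    f, t, Z = stft(rx, fs=fs, nperseg=1024, noverlap=768)
    # Sideband mask: exclude the stationary carrier (+/- 200 Hz) but
    # keep plausible Doppler shifts (up to `band` Hz away from it).
    side = (np.abs(f - f0) > 200) & (np.abs(f - f0) < band)
    energy = np.abs(Z[side]).sum(axis=0)    # per-frame sideband energy
    noise = np.median(energy)               # idle-level estimate
    hits = np.nonzero(energy > thresh * noise)[0]
    return t[hits[0]] if hits.size else None
```

Because the statistic is computed per short frame, even small-scale motion produces a detectable rise above the idle median, while an empty field of view stays below threshold.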
Proceedings of the 17th annual international symposium on International symposium on wearable computers - ISWC '13, 2013
We propose an activity and context recognition method in which the user carries a neck-worn receiver comprising a microphone, and small speakers on the wrists that generate ultrasounds. The system recognizes gestures on the basis of the volume of the received sound and the Doppler effect: the former indicates the distance between the neck and wrists, and the latter indicates the speed of the motions. Our approach thus substitutes ultrasound for the wired or wireless communication typically required in body-area motion sensing networks. The system also recognizes the room the user is in and the people near the user via ID signals generated by speakers placed in rooms and worn by people. A strength of the approach is that, for offline recognition, a simple audio recorder can serve as the receiver. We evaluate the approach in a scenario covering nine gestures/activities with 10 users. When there was no environmental sound generated by other people, the recognition rate was 87% on average. When there was environmental sound from other people, we compared the proposed ultrasound-based approach, which uses only ultrasound features, against a standard approach that uses both ultrasound and environmental-sound features; the recognition rates were 65% and 57%, respectively.
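For intuition, here is a sketch of the two per-frame cues described above: band energy around the wrist speaker's tone as a distance proxy, and the Doppler offset of the spectral centroid as a speed proxy. The carrier frequency, bandwidth, and sampling rate are assumptions for a commodity audio recorder.

```python
import numpy as np

def frame_features(frame, f0=20_000.0, fs=48_000):
    """Illustrative per-frame features for one wrist speaker's tone.

    frame: one short window of the neck-worn microphone signal.
    Returns (volume, doppler_offset): received energy near the carrier
    (~ neck-wrist distance) and centroid shift in Hz (~ motion speed).
    """
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    band = np.abs(freqs - f0) < 500           # +/- 500 Hz around the tone
    volume = spec[band].sum()
    centroid = np.dot(freqs[band], spec[band]) / spec[band].sum()
    return volume, centroid - f0
```

A sequence of such feature pairs per wrist could then feed any standard gesture classifier.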
IEEE Access, 2021
The aim of this work is to show that it is possible to develop a system that detects gestures using only ultrasonic signals and Edge devices. A set of 7 gestures plus idle has been defined, and gestures can be combined to increase the number recognized. Ultrasound transceivers are used to detect the two-dimensional gestures. The Edge-device approach implies that all data is processed on the device at the network edge rather than depending on external devices or services such as cloud computing. The system presented in this paper has been proven able to measure Time of Flight (ToF) signals that can be used to recognize multiple gestures through the integration of two transceivers, with an accuracy between 84.18% and 98.4%. Thanks to the optimization of the preprocessing correlation technique used to extract the ToF from the echo signals, and to a firmware design that enables the parallelization of concurrent processes, the system can be implemented as an Edge device.
INDEX TERMS Edge computing, gesture recognition, human system interaction (HSI), ultrasound.
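Given ToF-derived ranges from the two transceivers, the 2-D hand position follows from simple circle intersection. Below is a geometric sketch; the 10 cm baseline and coordinate convention are assumptions.

```python
import numpy as np

def locate_2d(r1, r2, baseline=0.10):
    """Hand position from two ranges (r = c * tof / 2, in metres).

    Transceivers are assumed to sit at (0, 0) and (baseline, 0).
    Returns (x, y) in the half-plane above the device, or None if
    the two range circles do not intersect.
    """
    x = (r1**2 - r2**2 + baseline**2) / (2 * baseline)
    y2 = r1**2 - x**2
    if y2 < 0:
        return None                    # inconsistent ranges, no fix
    return x, np.sqrt(y2)

# Tracking (x, y) over successive pulses yields the 2-D trajectory
# from which gestures are recognized.
```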
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009
This paper presents a new device based on ultrasonic sensors to recognize one-handed gestures. The device uses three ultrasonic receivers and a single transmitter. Gestures are characterized through the Doppler frequency shifts they generate in reflections of an ultrasonic tone emitted by the transmitter. We show that this setup can classify simple one-handed gestures with high accuracy. The ultrasonic Doppler device is very inexpensive (about 20 USD for the whole setup, including the acquisition system) and computationally efficient compared to most traditional devices (e.g., video).
IEEE Access
Advancement in smartphones has facilitated the investigation of new modalities of human-machine interaction, including communication through touch, voice, and gestures. Researchers have examined in depth the problem of recognizing distinct gestures (surface, hand, and motion). However, gesture recognition algorithms exhibit discontinuity when the user performs subsequent continuous gestures. The discontinuity may arise from the selection of a delimiter to differentiate between successive motions, or from the use of a complex algorithm to boost recognition accuracy, which takes significant time to recognize a gesture before the user may enter the next one. Further, gesture recognition based on template matching, machine learning models, or neural networks requires substantial storage space, processing resources, or both, which are resource-intensive for smartphones. This research proposes a novel Axis-Point Continuous Motion Gesture (APCMG) recognition algorithm that uses accelerometer sensor data to recognize continuous motion gestures in real time. The algorithm has low computational complexity and is easily implemented on resource-constrained devices with minimal computing cost, memory, and energy. The prime objective of APCMG is to find the start and end of a gesture in a continuous stream of accelerometer data and recognize the gesture in real time. To demonstrate APCMG's efficacy, an experimental Android application for dialing a phone number is considered. The app recognizes 12 continuous gestures corresponding to the digits 0 to 9, delete, and call termination. The experiments collected 7500 gesture samples from 25 volunteers. The algorithm efficiently recognizes isolated and continuous gestures with 95% and 94% accuracy, respectively, with minimal energy consumption.
INDEX TERMS Accelerometer, axis point, gesture recognition, continuous motion gesture (CMG).
I. INTRODUCTION
A. BACKGROUND
Smartphones equipped with sophisticated sensors are ushering humans into a new paradigm of human-machine interaction (HMI), which is user-friendly and natural. Smartphones have a variety of inbuilt sensors, such as cameras, microphones, accelerometers, and gyroscopes. In HMI, sensors capture human gestures, specifically voice, speech, and movement, which are an alternative way of communicating with smartphones [1]. Human gestures are employed for a variety of purposes, such as biometric authentication [2], hands-free interaction [3], assistance for visually impaired persons [4], activity recognition [5], and fall detection [6]. Gestures are meaningful physical movements of the fingers, hands, arms, head, and face, i.e., communicative body
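For illustration only, here is a generic energy-threshold segmenter for finding gesture start and end in a continuous accelerometer stream. This is not the APCMG algorithm, and the thresholds, smoothing window, and sampling rate are assumptions.

```python
import numpy as np

def segment_gestures(acc, fs=100, win=0.2, on=1.5, off=0.5):
    """Generic start/end detection on a continuous accelerometer stream.

    acc: (n, 3) samples in units of g. Returns a list of (start, end)
    sample indices. Hysteresis (on > off) avoids chattering at the
    boundary of a gesture.
    """
    mag = np.linalg.norm(acc, axis=1)
    motion = np.abs(mag - 1.0)                 # deviation from gravity
    k = max(1, int(win * fs))
    env = np.convolve(motion, np.ones(k) / k, mode="same")  # smoothed
    segs, start = [], None
    for i, e in enumerate(env):
        if start is None and e > on:
            start = i                          # gesture begins
        elif start is not None and e < off:
            segs.append((start, i))            # gesture ends
            start = None
    return segs
```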
An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with an average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.
Fig. 1. Gesture-based interaction prototype with the gesture-capturing device designed to be worn around the forearm.
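For reference, here is a plain dynamic time-warping distance between two feature sequences; the paper uses an improved DTW variant, which is not reproduced here. Classification then simply picks the nearest stored template.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two gesture sequences.

    a, b: (n, d) and (m, d) arrays of per-sample feature vectors
    (e.g. accelerometer + SEMG features). Returns the cumulative
    alignment cost; smaller means more similar.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Usage: label = min(templates, key=lambda t: dtw_distance(query, t.seq))
```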
ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal, 2013
Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices that is able to recognize a collection of 10 different hand gestures. The system was conceived to be lightweight and to operate in real time in a user-independent manner. It was implemented on a smartphone and evaluated through a collection of user tests, which showed recognition accuracy similar to other state-of-the-art techniques at lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with gestures made with the mobile phone.
Proceedings of the 13th International Joint Conference on Computational Intelligence
Proceedings of the 5th Augmented Human International Conference on - AH '14, 2014
Research Journal of Applied Sciences, Engineering and Technology, 2014
International Journal of Computer Applications, 2010
Advances in Intelligent Systems and Computing, 2018
2007 2nd International Conference on Pervasive Computing and Applications, 2007
IRJET, 2021
2006 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'06), 2006
IIUM Engineering Journal, 2019
Lecture Notes in Computer Science, 2014
Periodicals of Engineering and Natural Sciences (PEN)