2015 IEEE Conference on Computer Communications (INFOCOM), 2015
We present WiGest: a system that leverages changes in WiFi signal strength to sense in-air hand gestures around the user's mobile device. Compared to related work, WiGest is unique in using standard WiFi equipment, with no modifications, and no training for gesture recognition. The system identifies different signal change primitives, from which we construct mutually independent gesture families. These families can be mapped to distinguishable application actions. We address various challenges including cleaning the noisy signals, detecting gesture type and attributes, reducing false positives due to interfering humans, and adapting to changing signal polarity. We implement a proof-of-concept prototype using off-the-shelf laptops and extensively evaluate the system in both an office environment and a typical apartment with standard WiFi access points. Our results show that WiGest detects the basic primitives with an accuracy of 87.5% using a single AP only, including through-the-wall non-line-of-sight scenarios. This accuracy increases to 96% using three overheard APs. In addition, when evaluating the system using a multi-media player application, we achieve a classification accuracy of 96%. This accuracy is robust to the presence of other interfering humans, highlighting WiGest's ability to enable future ubiquitous hands-free gesture-based interaction with mobile devices.
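As a rough illustration of the primitive-detection idea described in this abstract, the sketch below flags rising and falling edges in a noisy RSSI stream. The window size and edge threshold are hypothetical values chosen for the example, not WiGest's parameters (the paper also uses wavelet denoising rather than a moving average):

```python
# Minimal sketch: detecting rising/falling-edge "primitives" in a noisy RSSI
# stream, loosely in the spirit of WiGest. Window size and threshold are
# hypothetical illustration values, not the ones used in the paper.
import numpy as np

def detect_primitives(rssi, smooth_win=5, edge_thresh=3.0):
    """Return a list of (index, 'rise'|'fall') edges in an RSSI trace (dBm)."""
    # Denoise with a simple moving average (the paper uses wavelet denoising).
    kernel = np.ones(smooth_win) / smooth_win
    smooth = np.convolve(rssi, kernel, mode="same")
    # The first difference approximates the signal-strength slope.
    slope = np.diff(smooth)
    events = []
    for i, s in enumerate(slope):
        if s > edge_thresh:
            events.append((i, "rise"))
        elif s < -edge_thresh:
            events.append((i, "fall"))
    return events

# Example: a synthetic trace with one dip caused by a hand pass.
trace = np.concatenate([np.full(20, -40.0), np.full(10, -55.0), np.full(20, -40.0)])
trace += np.random.normal(0, 0.5, trace.size)
print(detect_primitives(trace))
```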
IEEE Sensors Journal, 2019
This paper introduces Wisture, a new online machine learning solution for recognizing touch-less hand gestures on a smartphone (mobile device). Wisture relies on the standard Wi-Fi Received Signal Strength (RSS) measurements, Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) learning method, thresholding filters, and a traffic induction approach. Unlike other Wi-Fi based gesture recognition methods, the proposed method does not require a modification of the device hardware or the operating system and performs the gesture recognition without interfering with the normal operation of other smartphone applications. We discuss the characteristics of Wisture and conduct extensive experiments to compare the performance of the RNN learning method against state-of-the-art machine learning solutions regarding both accuracy and efficiency. The experiments include a set of different scenarios with a change in spatial setup and network traffic between the smartphone and Wi-Fi access points (AP). The results show that Wisture achieves an online gesture recognition accuracy of up to 93% (average 78%) in detecting and classifying three gestures.
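The abstract above pairs an LSTM RNN with RSS windows; the sketch below shows what such a classifier could look like in PyTorch, assuming three gesture classes. Layer sizes, window length, and the training procedure are assumptions for illustration, not Wisture's actual configuration:

```python
# Minimal sketch of an LSTM classifier over windows of Wi-Fi RSS samples,
# loosely following the Wisture setup (three gesture classes). Hidden size,
# window length, and training details are assumptions, not the paper's values.
import torch
import torch.nn as nn

class RssLstm(nn.Module):
    def __init__(self, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, 1) RSS values in dBm
        _, (h, _) = self.lstm(x)      # h: (1, batch, hidden), last hidden state
        return self.fc(h.squeeze(0))  # class logits

model = RssLstm()
window = torch.randn(8, 200, 1)       # 8 windows of 200 RSS samples each
logits = model(window)
print(logits.shape)                   # torch.Size([8, 3])
```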
IJSRD, 2013
This paper presents WiSee, a novel gesture recognition system that leverages wireless signals (e.g., Wi-Fi) to enable whole-home sensing and recognition of human gestures. Since wireless signals do not require line-of-sight and can traverse through walls, WiSee can enable whole-home gesture recognition using a few wireless sources. Further, it achieves this goal without requiring instrumentation of the human body with sensing devices.
Purpose - Recently, much research has been devoted to studying the possibility of using wireless signals of Wi-Fi networks in human-gesture recognition. These studies focus on classifying gestures regardless of who is performing them, and only a few of the previous works make use of the wireless channel state information in identifying humans. This paper aims to recognize different humans and their multiple gestures in an indoor environment. Design/methodology/approach - The authors designed a gesture recognition system that consists of channel state information data collection, preprocessing, feature extraction and classification to guess the human and the gesture in the vicinity of a Wi-Fi-enabled device with a modified Wi-Fi device driver to collect the channel state information and process it in real time. Findings - The proposed system proved to work well for different humans and different gestures, with an accuracy that ranges from 87 per cent for multiple humans and multiple gestures to 98 per cent for individual humans' gesture recognition. Originality/value - This paper used new preprocessing and filtering techniques, proposed new features to be extracted from the data and a new classification method that has not been used in this field before.
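To make the collect → preprocess → extract features → classify pipeline described above concrete, here is a minimal sketch using simple amplitude statistics and an SVM. The features, filters, and classifier in the paper differ; everything below is an illustrative assumption:

```python
# Minimal sketch of a CSI gesture/person classification pipeline: window-level
# amplitude statistics fed to an SVM. Features and classifier are assumptions,
# not the ones used in the paper above.
import numpy as np
from sklearn.svm import SVC

def features(csi_window):
    """csi_window: (time, subcarriers) CSI amplitude matrix -> feature vector."""
    amp = np.abs(csi_window)
    return np.concatenate([amp.mean(axis=0), amp.std(axis=0), np.ptp(amp, axis=0)])

# Toy data: 40 labelled windows, 30 subcarriers, 100 time samples each.
rng = np.random.default_rng(0)
X = np.array([features(rng.normal(size=(100, 30))) for _ in range(40)])
y = rng.integers(0, 4, size=40)       # 4 person/gesture labels

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```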
ACM Transactions on Sensor Networks
Gesture detection based on RF signals has gained increasing popularity in recent years due to several benefits it has brought such as eliminating the need to carry additional devices and providing better privacy. In traditional methods, significant breakthroughs have been made to improve recognition accuracy and scene robustness, but the limited computing power of edge devices (the first-level equipment to receive signals) and the requirement of fast response for detection have not been adequately addressed. In this paper, we propose a lightweight Wi-Fi gesture recognition system, referred to as WiFine, which is designed and implemented for deployment on low-end edge devices without the use of any additional high-performance services in the process. Towards these goals, we first design algorithms for phase difference selection and amplitude enhancement, respectively, to tackle the problem of data drift caused by user change. Then, we design a cross-dimension fusion method to extract...
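The abstract above mentions phase difference selection as a preprocessing step. A common way to obtain phase differences from raw CSI is conjugate multiplication across two receive antennas, sketched below; WiFine's own selection and amplitude-enhancement logic is not reproduced, and the array shapes are assumptions:

```python
# Minimal sketch of computing per-subcarrier phase differences between two
# receive antennas from raw complex CSI. Only the generic preprocessing step is
# shown; WiFine's selection/enhancement algorithms are not reproduced here.
import numpy as np

def phase_difference(csi_a, csi_b):
    """csi_a, csi_b: complex CSI arrays of shape (time, subcarriers)."""
    # Conjugate multiplication cancels the common carrier phase offset,
    # leaving the relative phase between the two antennas.
    return np.angle(csi_a * np.conj(csi_b))

rng = np.random.default_rng(1)
csi_a = rng.normal(size=(50, 30)) + 1j * rng.normal(size=(50, 30))
csi_b = rng.normal(size=(50, 30)) + 1j * rng.normal(size=(50, 30))
print(phase_difference(csi_a, csi_b).shape)   # (50, 30)
```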
This exploratory survey paper explores the basic principle behind WiFi-oriented gesture control systems. The paper briefly provides a literature review of this technology, which has vast applications in real-time situations such as gaming, home automation, medicine for the disabled, and the latest electronic gadgets. Researchers from the University of Washington have done milestone work on this technology. It is expected that in the 2020 era, WiFi-based gesture control and recognition systems will replace other man-machine interface methods.
ArXiv, 2017
This paper introduces Wisture, a new online machine learning solution for recognizing touch-less dynamic hand gestures on a smartphone. Wisture relies on the standard Wi-Fi Received Signal Strength (RSS) using a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), thresholding filters and traffic induction. Unlike other Wi-Fi based gesture recognition methods, the proposed method does not require a modification of the smartphone hardware or the operating system, and performs the gesture recognition without interfering with the normal operation of other smartphone applications. We discuss the characteristics of Wisture, and conduct extensive experiments to compare its performance against state-of-the-art machine learning solutions in terms of both accuracy and time efficiency. The experiments include a set of different scenarios in terms of both spatial setup and traffic between the smartphone and Wi-Fi access points (AP). The results show that Wisture achieves an online rec...
2014 IEEE International Conference on Pervasive Computing and Communications (PerCom), 2014
We investigate the use of WiFi Received Signal Strength Information (RSSI) at a mobile phone for the recognition of situations, activities and gestures. In particular, we propose a device-free and passive activity recognition system that does not require any device carried by the user and uses ambient signals. We discuss challenges and lessons learned for the design of such a system on a mobile phone and propose appropriate features to extract activity characteristics from RSSI. We demonstrate the feasibility of recognising activities, gestures and environmental situations from RSSI obtained by a mobile phone. The case studies were conducted over a period of about two months in which about 12 hours of continuous RSSI data was sampled, in two countries and with 11 participants in total. Results demonstrate the potential to utilise RSSI for the extension of the environmental perception of a mobile device as well as for the interaction with touch-free gestures. The system achieves an accuracy of 0.51 while distinguishing as many as 11 gestures and can reach 0.72 on average for four more disparate ones.
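The system above extracts activity characteristics from phone-side RSSI. The sketch below computes a few plausible window features (variance, range, spectral entropy); the exact feature set used in the paper is not reproduced, and these choices are assumptions for illustration:

```python
# Minimal sketch of window features over a phone's RSSI stream for activity or
# gesture recognition. The feature set (variance, range, spectral entropy) is an
# illustrative assumption, not the paper's feature design.
import numpy as np

def rssi_features(window, eps=1e-12):
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    p = spectrum / (spectrum.sum() + eps)
    spectral_entropy = -(p * np.log2(p + eps)).sum()
    return np.array([window.var(), np.ptp(window), spectral_entropy])

window = np.random.normal(-50, 2, size=256)   # 256 RSSI samples in dBm
print(rssi_features(window))
```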
2021
We introduce AirWare, an in-air hand-gesture recognition system that uses the already embedded speaker and microphone in most electronic devices, together with embedded infrared proximity sensors. Gestures identified by AirWare are performed in the air above a touchscreen or a mobile phone. AirWare utilizes convolutional neural networks to classify a large vocabulary of hand gestures using multi-modal audio Doppler signatures and infrared (IR) sensor information. As opposed to other systems which use high frequency Doppler radars or depth cameras to uniquely identify in-air gestures, AirWare does not require any external sensors. In our analysis, we use openly available APIs to interface with the Samsung Galaxy S5 audio and proximity sensors for data collection. We find that AirWare is not reliable enough for a deployable interaction system when trying to classify a gesture set of 21 gestures, with an average true positive rate of only 50.5% per gesture. To improve performance, we t...
ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Gesture recognition using wireless sensing has opened up a plethora of applications in the field of human-computer interaction. However, most existing works require wearables or tedious training/calibration to be robust. In this work, we propose WiGRep, a time reversal based gesture recognition approach using Wi-Fi, which can recognize different gestures by counting the number of repeating gesture segments. Built upon the time reversal phenomenon in RF transmission, the Time Reversal Resonating Strength (TRRS) is used to detect repeating patterns in a gesture. A robust low-complexity algorithm is proposed to accommodate possible variations of gestures and indoor environments. The main advantages of WiGRep are that it is calibration-free and location and environment independent. Experiments performed in both line-of-sight and non-line-of-sight scenarios demonstrate a detection rate of 99.6% and 99.4%, respectively, for a fixed false alarm rate of 5%.
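To illustrate the repeat-counting idea only, the sketch below counts repeating segments by locating peaks in a normalized autocorrelation of a 1-D motion statistic. Note that plain autocorrelation is used here as a stand-in for the paper's TRRS, and the lag/threshold values are hypothetical:

```python
# Minimal sketch of counting repeating gesture segments via peaks in a
# normalized autocorrelation. This substitutes autocorrelation for the paper's
# Time Reversal Resonating Strength (TRRS); thresholds are hypothetical.
import numpy as np

def count_repeats(signal, min_lag=20, peak_thresh=0.6):
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac /= ac[0]                                   # normalize so ac[0] == 1
    peaks = [lag for lag in range(min_lag, x.size - 1)
             if ac[lag] > peak_thresh
             and ac[lag] >= ac[lag - 1] and ac[lag] >= ac[lag + 1]]
    return len(peaks)

t = np.arange(400)
periodic = np.sin(2 * np.pi * t / 80) + 0.1 * np.random.randn(t.size)
print(count_repeats(periodic))   # roughly the number of repeated cycles detected
```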
2019 IEEE Integrated STEM Education Conference (ISEC), 2019
WiFi is a growing presence in everyday life, and its security poses a growing concern. Today's motion detection implementations are severely lacking in the areas of secrecy, scope, and cost. To combat this problem, we aim to develop a motion detection system that utilizes WiFi Channel State Information (CSI), which describes how a wireless signal propagates from the transmitter to the receiver. The goal of this study is to develop a real-time motion detection and classification system that is discreet, cost-effective, and easily implementable. The system would only require an Ubuntu laptop with an Intel Ultimate N WiFi Link 5300 and a standard router. The system will be developed in two parts: (1) a robust system to track CSI variations in real time, and (2) an algorithm to classify the motion. The system used to track CSI variance in real time was completed in August 2018. Initial results show that the introduction of motion to a previously motionless area is detected with high confidence. We present the development of (1) anomaly detection, utilizing the moving average filter implemented in the initial program and/or unsupervised machine learning, and (2) supervised machine learning algorithms to classify a set of simple motions using proposed feature extraction methods. Lastly, classification methods such as Decision Tree, Naive Bayes, and Long Short-Term Memory can be used to classify basic actions regardless of speed, location, or orientation.
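The anomaly-detection step described above can be illustrated with a simple variance-over-baseline detector on CSI amplitudes; the window length and the 3-sigma-style threshold below are assumptions, not the study's parameters:

```python
# Minimal sketch of a moving-window motion (anomaly) detector on CSI amplitude:
# flag motion when windowed variance rises well above a motionless baseline.
# Window length and threshold factor are illustrative assumptions.
import numpy as np

def motion_mask(amplitude, win=50, k=3.0, baseline_len=500):
    var = np.array([amplitude[i:i + win].var()
                    for i in range(amplitude.size - win)])
    mu, sigma = var[:baseline_len].mean(), var[:baseline_len].std()
    return var > mu + k * sigma            # True where motion is suspected

still = np.random.normal(20, 0.2, 1000)    # quiet room
moving = np.random.normal(20, 1.5, 300)    # person moving
amp = np.concatenate([still, moving])
print(motion_mask(amp).nonzero()[0][:5])   # indices of the first flagged windows
```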
Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, 2015
Keystroke privacy is critical for ensuring the security of computer systems and the privacy of human users, as what is being typed could be passwords or privacy-sensitive information. In this paper, we show for the first time that WiFi signals can also be exploited to recognize keystrokes. The intuition is that while typing a certain key, the hands and fingers of a user move in a unique formation and direction and thus generate a unique pattern in the time-series of Channel State Information (CSI) values, which we call the CSI-waveform for that key. In this paper, we propose a WiFi signal based keystroke recognition system called WiKey. WiKey consists of two Commercial Off-The-Shelf (COTS) WiFi devices, a sender (such as a router) and a receiver (such as a laptop). The sender continuously emits signals and the receiver continuously receives signals. When a human subject types on a keyboard, WiKey recognizes the typed keys based on how the CSI values at the WiFi signal receiver end change. We implemented the WiKey system using a TP-Link TL-WR1043ND WiFi router and a Lenovo X200 laptop. WiKey achieves more than 97.5% detection rate for detecting keystrokes and 96.4% recognition accuracy for classifying single keys. In real-world experiments, WiKey can recognize keystrokes in a continuously typed sentence with an accuracy of 93.5%.
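One simple way to match an observed keystroke CSI-waveform against stored per-key templates is dynamic time warping with a nearest-neighbour rule, sketched below. WiKey's actual pipeline is richer (wavelet-based features and kNN), so this is only an illustrative stand-in with made-up templates:

```python
# Minimal sketch: match a keystroke CSI-waveform to per-key templates with DTW
# and a nearest-neighbour rule. Not WiKey's actual feature/classifier pipeline.
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(waveform, templates):
    """templates: dict mapping key label -> list of template waveforms."""
    return min(((dtw(waveform, t), key)
                for key, ts in templates.items() for t in ts))[1]

templates = {"a": [np.sin(np.linspace(0, 3, 60))],
             "b": [np.cos(np.linspace(0, 3, 60))]}
query = np.sin(np.linspace(0, 3, 55)) + 0.05 * np.random.randn(55)
print(classify(query, templates))   # expected: "a"
```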
2014
This paper presents a recognition scheme for fine-grain gestures. The scheme leverages a directional antenna and short-range wireless propagation properties to recognize a vocabulary of action-oriented gestures from the American Sign Language. Since the scheme only relies on commonly available wireless features such as Received Signal Strength (RSS), signal phase differences, and frequency subband selection, it is readily deployable on commercial off-the-shelf IEEE 802.11 devices. We have implemented the proposed scheme and evaluated it in two potential application scenarios: gesture-based electronic activation from a wheelchair and gesture-based control of a car infotainment system. The results show that the proposed scheme can correctly identify and classify up to 25 fine-grain gestures with an average accuracy of 92% for the first application scenario and 84% for the second scenario.
Device-free human gesture recognition (HGR) using commercial off-the-shelf (COTS) Wi-Fi devices has gained attention with recent advances in wireless technology. HGR recognizes the human activity performed by capturing the reflections of Wi-Fi signals from moving humans and storing them as raw channel state information (CSI) traces. Existing work on HGR applies noise reduction and transformation to pre-process the raw CSI traces. However, these methods fail to capture the non-Gaussian information in the raw CSI data due to their limitation of dealing with linear signal representations alone. The proposed higher order statistics-based recognition (HOS-Re) model extracts higher order statistical (HOS) features from raw CSI traces and selects a robust feature subset for the recognition task. HOS-Re addresses the limitations of the existing methods by extracting third order cumulant features that maximize the recognition accuracy. Subsequently, feature selection methods derived from information theory construct a robust and highly informative feature subset, fed as input to a multilevel support vector machine (SVM) classifier in order to measure the performance. The proposed methodology is validated using the public database SignFi, consisting of 276 gestures with 8280 gesture instances, out of which 5520 are from the laboratory and 2760 from the home environment, using a 10 × 5 cross-validation. HOS-Re achieved an average recognition accuracy of 97.84%, 98.26% and 96.34% for the lab, home and lab + home environments, respectively. The average recognition accuracy for 150 sign gestures with 7500 instances, collected from five different users, was 96.23% in the laboratory environment.
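To give a feel for third-order cumulant features on CSI amplitudes, the sketch below computes a few low-lag cumulants per subcarrier (for a zero-mean series, the zero-lag case reduces to the third central moment). The paper's full cumulant computation, feature selection, and multilevel SVM are not reproduced; the lag choices are assumptions:

```python
# Minimal sketch of third-order cumulant features over CSI amplitudes.
# Only a few diagonal lags are computed; this is an illustration, not the
# HOS-Re feature/selection machinery.
import numpy as np

def third_order_cumulant(x, lag1=0, lag2=0):
    x = x - x.mean()
    n = x.size - max(lag1, lag2)
    return np.mean(x[:n] * x[lag1:lag1 + n] * x[lag2:lag2 + n])

def hos_features(csi_window, lags=(0, 1, 2)):
    """csi_window: (time, subcarriers) amplitudes -> cumulant features per subcarrier."""
    return np.array([[third_order_cumulant(csi_window[:, s], l, l) for l in lags]
                     for s in range(csi_window.shape[1])]).ravel()

rng = np.random.default_rng(2)
print(hos_features(rng.normal(size=(100, 5))).shape)   # (15,)
```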
2007 2nd International Conference on Pervasive Computing and Applications, 2007
This research looks at one approach to providing mobile phone users with a simple, low-cost, real-time user interface allowing them to control highly interactive public space applications involving one user or a large number of simultaneous users. In order to accurately sense the real-time hand movement gestures of mobile phone users, the method uses miniature accelerometers that send the orientation signals over the network's audio channel to a central computer for signal processing and application delivery. This ensures minimal delay, minimal connection protocol incompatibility, and minimal discrimination by mobile phone type or version. Without the need for mass user compliance, large numbers of users could begin to control public space cultural and entertainment applications using simple gesture movements.
Ubiquitous computing aims to seamlessly integrate computing into our daily lives, and requires reliable information on human activities and state for various applications. In this paper, we propose a device-free human activity recognition system that leverages the rich information behind WiFi signals to detect human activities in indoor environments, including walking, sitting, and standing. The key idea of our system is to use the dynamic features of activities, which we carefully examine and analyze through the characteristics of channel state information. We evaluate the impact of location changes on WiFi signal distribution for different activities and design an activity detection system that employs signal processing techniques to extract discriminative features from wireless signals in the frequency and temporal domains. We implement our system on a single off-the-shelf WiFi device connecting to a commercial wireless access point and evaluate it in laboratory and conference room environments. Our experiments demonstrate the feasibility of using WiFi signals for device-free human activity recognition, which could provide a practical and non-intrusive solution for indoor monitoring and ubiquitous computing applications.
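The abstract above extracts discriminative features from the frequency and temporal domains. The sketch below pairs a temporal feature (amplitude variance) with a frequency feature (dominant movement frequency from an FFT) per CSI window; the sampling rate and feature choices are assumptions for illustration:

```python
# Minimal sketch: one temporal-domain feature (variance) plus one
# frequency-domain feature (dominant frequency) per CSI amplitude window.
# Sampling rate and feature choices are illustrative assumptions.
import numpy as np

def activity_features(amp, fs=100.0):
    """amp: 1-D CSI amplitude stream sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(amp - amp.mean()))
    freqs = np.fft.rfftfreq(amp.size, d=1.0 / fs)
    dominant = freqs[spectrum.argmax()]
    return np.array([amp.var(), dominant])

t = np.arange(0, 4, 1 / 100.0)
walking_like = 20 + np.sin(2 * np.pi * 1.5 * t)   # ~1.5 Hz body motion
print(activity_features(walking_like))            # variance, ~1.5 Hz
```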
Proceedings of the 27th annual ACM symposium on User interface software and technology - UIST '14, 2014
Touch input is expressive but can occlude large parts of the screen (A). We propose a machine learning based algorithm for gesture recognition expanding the interaction space around the mobile device (B), adding in-air gestures and hand-part tracking (D) to commodity off-the-shelf mobile devices, relying only on the device's camera (and no hardware modifications). We demonstrate a number of compelling interactive scenarios including bi-manual input to mapping and gaming applications (C+D). The algorithm runs in real time and can even be used on ultra-mobile devices such as smartwatches (E).
2011
Touch sensing and computer vision have made human-computer interaction possible in environments where keyboards, mice, or other handheld implements are not available or desirable. However, the high cost of instrumenting environments limits the ubiquity of these technologies, particularly in home scenarios where cost constraints dominate installation decisions. Fortunately, home environments frequently offer a signal that is unique to locations and objects within the home: electromagnetic noise. In this work, we use the body as a receiving antenna and leverage this noise for gestural interaction. We demonstrate that it is possible to robustly recognize touched locations on an uninstrumented home wall using no specialized sensors. We conduct a series of experiments to explore the capabilities that this new sensing modality may offer. Specifically, we show robust classification of gestures such as the position of discrete touches around light switches, the particular light switch being touched, which appliances are touched, differentiation between hands, as well as continuous proximity of hand to the switch, among others. We close by discussing opportunities, limitations, and future work.
International Journal of Electrical and Computer Engineering (IJECE), 2025
Wireless sensing has emerged as a dynamic field with diverse applications across smart cities, healthcare, the internet of things (IoT), and virtual reality gaming. This burgeoning area capitalizes on the capacity to detect locations, activities, gestures, and vital signs by assessing their impact on ambient wireless signals. This review critically examines the prevailing challenges within wireless sensing and predicts future research trajectories. Even with the potential for nuanced signal processing facilitated by Wi-Fi propagation, its efficacy is impeded by noise interference in confined areas during transmission and reception. Consequently, this work aims to augment signal processing performance accuracy by delving into the most promising techniques and underexplored methods utilizing channel state information (CSI). Furthermore, the work offers a view into the potential of human activity recognition predicated on CSI properties. The study focuses on exploring location-independent sensing techniques based on CSI, discussing relevant considerations and contemporary approaches used in Wi-Fi sensing tasks. The optimal practices in analysis are based on model design, data collection, and result interpretation. The discussion investigates representative applications in detail and outlines the major considerations in developing human activity recognition (HAR) based on Wi-Fi by analyzing the current critical issues of CSI-based behavior recognition methods and pointing out possible future research directions.
2018 27th International Conference on Computer Communication and Networks (ICCCN), 2018
Accurate human gesture recognition is becoming a cornerstone for myriad emerging applications in human-computer interaction. Existing gesture recognition systems either require dedicated extra infrastructure or users' active cooperation. Although some WiFi-enabled gesture recognition systems have been proposed, they are vulnerable to environmental dynamics and rely on tedious data re-labeling and expert knowledge each time they are implemented in a new environment. In this paper, we propose a WiFi-enabled device-free adaptive gesture recognition scheme, WiADG, that is able to identify human gestures accurately and consistently under environmental dynamics via adversarial domain adaptation. Firstly, a novel OpenWrt-based IoT platform is developed, enabling the direct collection of Channel State Information (CSI) measurements from commercial IoT devices. After constructing an accurate source classifier with labeled source CSI data via the proposed convolutional neural network in the source domain (original environment), we design an unsupervised domain adaptation scheme to reduce the domain discrepancy between the source and the target domain (new environment) and thus improve the generalization performance of the source classifier. The domain-adversarial objective is to train a generator (target encoder) to map the unlabeled target data to a domain-invariant latent feature space so that a domain discriminator cannot distinguish the domain labels of the data. In the implementation phase, we utilize the trained target encoder to map the target CSI frames to the latent feature space and use the source classifier to identify various gestures performed by the user. We implement WiADG on commercial WiFi routers and conduct experiments in multiple indoor environments. The results validate that WiADG achieves 98% gesture recognition accuracy in the original environment. Furthermore, the proposed unsupervised adversarial domain adaptation is able to enhance the recognition accuracy of WiADG by 25% on average, without the need for labeled data collection or new classifier generation, when implemented in new environments.
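The adversarial adaptation step described above (training a target encoder so a domain discriminator cannot tell its features from source features) can be sketched as below. Network sizes, the fully-connected encoders, and the optimization loop are assumptions for illustration and not the WiADG implementation, which uses a convolutional source classifier:

```python
# Minimal sketch of adversarial domain adaptation for CSI features: a target
# encoder is trained to fool a source/target domain discriminator. Architectures
# and hyperparameters are assumptions, not WiADG's implementation.
import torch
import torch.nn as nn

feat_dim, csi_dim = 64, 128
source_enc = nn.Sequential(nn.Linear(csi_dim, feat_dim), nn.ReLU())   # assumed pretrained and frozen
target_enc = nn.Sequential(nn.Linear(csi_dim, feat_dim), nn.ReLU())
discriminator = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(target_enc.parameters(), lr=1e-3)

for step in range(100):
    src = torch.randn(32, csi_dim)   # stand-ins for source/target CSI frames
    tgt = torch.randn(32, csi_dim)
    # 1) Train the discriminator to separate source (label 1) from target (label 0) features.
    d_loss = bce(discriminator(source_enc(src).detach()), torch.ones(32, 1)) + \
             bce(discriminator(target_enc(tgt).detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Train the target encoder to fool the discriminator (its features labeled as source).
    g_loss = bce(discriminator(target_enc(tgt)), torch.ones(32, 1))
    opt_t.zero_grad(); g_loss.backward(); opt_t.step()
```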