2012, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Gesture is becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture recognition sensors in existing mobile platforms. We present SoundWave, a technique that leverages the speaker and microphone already embedded in most commodity devices to sense in-air gestures around the device. To do this, we generate an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. We measure this shift with the microphone to infer various gestures. In this note, we describe the phenomena and detection algorithm, demonstrate a variety of gestures, and present an informal evaluation on the robustness of this approach across different devices and people.
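The detection principle lends itself to a compact sketch: play a continuous inaudible pilot tone and look for energy that bleeds into the FFT bins just above or below it, which indicates Doppler-shifted reflections from a moving hand. The sketch below is a minimal illustration under assumed parameters (a 20 kHz tone, 48 kHz capture, hand-picked bin ranges and threshold); it is not the authors' implementation.

```python
# Minimal sketch of Doppler-shift gesture sensing; the tone frequency, sample
# rate, bin ranges, and threshold are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 48_000   # Hz, assumed microphone capture rate
TONE_FREQ   = 20_000   # Hz, assumed inaudible pilot tone

def doppler_direction(frame: np.ndarray, threshold_db: float = 10.0) -> str:
    """Classify coarse hand motion from one buffer of microphone samples."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    tone_bin = int(np.argmin(np.abs(freqs - TONE_FREQ)))
    peak = spectrum[tone_bin] + 1e-12

    # Reflections off an object moving toward the device shift energy above
    # the pilot tone; motion away from the device shifts it below.
    upper = spectrum[tone_bin + 2 : tone_bin + 35].max() + 1e-12
    lower = spectrum[tone_bin - 34 : tone_bin - 1].max() + 1e-12

    if 20 * np.log10(upper / peak) > -threshold_db:
        return "approaching"
    if 20 * np.log10(lower / peak) > -threshold_db:
        return "receding"
    return "idle"
```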
2021
We introduce AirWare, an in-air hand-gesture recognition system that uses the speaker and microphone already embedded in most electronic devices, together with embedded infrared proximity sensors. Gestures identified by AirWare are performed in the air above a touchscreen or a mobile phone. AirWare uses convolutional neural networks to classify a large vocabulary of hand gestures from multi-modal audio Doppler signatures and infrared (IR) sensor information. Unlike other systems, which use high-frequency Doppler radars or depth cameras to uniquely identify in-air gestures, AirWare does not require any external sensors. In our analysis, we use openly available APIs to interface with the Samsung Galaxy S5 audio and proximity sensors for data collection. We find that AirWare is not reliable enough for a deployable interaction system when trying to classify a gesture set of 21 gestures, with an average true positive rate of only 50.5% per gesture. To improve performance, we t...
Proceedings of the 27th annual ACM symposium on User interface software and technology - UIST '14, 2014
Touch input is expressive but can occlude large parts of the screen (A). We propose a machine learning based algorithm for gesture recognition that expands the interaction space around the mobile device (B), adding in-air gestures and hand-part tracking (D) to commodity off-the-shelf mobile devices, relying only on the device's camera (and no hardware modifications). We demonstrate a number of compelling interactive scenarios, including bi-manual input to mapping and gaming applications (C+D). The algorithm runs in real time and can even be used on ultra-mobile devices such as smartwatches (E).
2015 IEEE Conference on Computer Communications (INFOCOM), 2015
We present WiGest: a system that leverages changes in WiFi signal strength to sense in-air hand gestures around the user's mobile device. Compared to related work, WiGest is unique in using standard WiFi equipment, with no modifications and no training for gesture recognition. The system identifies different signal-change primitives, from which we construct mutually independent gesture families. These families can be mapped to distinguishable application actions. We address various challenges, including cleaning the noisy signals, detecting gesture types and attributes, reducing false positives due to interfering humans, and adapting to changing signal polarity. We implement a proof-of-concept prototype using off-the-shelf laptops and extensively evaluate the system in both an office environment and a typical apartment with standard WiFi access points. Our results show that WiGest detects the basic primitives with an accuracy of 87.5% using a single AP only, including in through-the-wall non-line-of-sight scenarios. This accuracy increases to 96% using three overheard APs. In addition, when evaluating the system using a multimedia player application, we achieve a classification accuracy of 96%. This accuracy is robust to the presence of other interfering humans, highlighting WiGest's ability to enable future ubiquitous hands-free gesture-based interaction with mobile devices.
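The primitive-extraction idea can be illustrated in a few lines: smooth the RSSI stream, label sustained rises, falls, and pauses, then collapse runs of the same label into primitives. The smoothing window, slope threshold, and input format below are illustrative assumptions; the actual WiGest pipeline also includes the noise filtering and false-positive rejection described above.

```python
# Minimal sketch of extracting rising/falling/pause primitives from a WiFi
# RSSI stream; parameters are illustrative assumptions, not WiGest's values.
import numpy as np

def extract_primitives(rssi_samples, window=5, slope_thresh=1.5):
    """Return a list of ('rising'|'falling'|'pause', start_index) primitives."""
    rssi = np.convolve(rssi_samples, np.ones(window) / window, mode="valid")
    slopes = np.diff(rssi)
    primitives = []
    for i, s in enumerate(slopes):
        if s > slope_thresh:
            label = "rising"
        elif s < -slope_thresh:
            label = "falling"
        else:
            label = "pause"
        # Collapse consecutive samples with the same label into one primitive.
        if not primitives or primitives[-1][0] != label:
            primitives.append((label, i))
    return primitives
```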
2011
Touch sensing and computer vision have made human-computer interaction possible in environments where keyboards, mice, or other handheld implements are not available or desirable. However, the high cost of instrumenting environments limits the ubiquity of these technologies, particularly in home scenarios where cost constraints dominate installation decisions. Fortunately, home environments frequently offer a signal that is unique to locations and objects within the home: electromagnetic noise. In this work, we use the body as a receiving antenna and leverage this noise for gestural interaction. We demonstrate that it is possible to robustly recognize touched locations on an uninstrumented home wall using no specialized sensors. We conduct a series of experiments to explore the capabilities that this new sensing modality may offer. Specifically, we show robust classification of gestures such as the position of discrete touches around light switches, the particular light switch being touched, and which appliances are touched, as well as differentiation between hands and continuous proximity of the hand to the switch, among others. We close by discussing opportunities, limitations, and future work.
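One way to picture the classification step this implies: treat the body-coupled electromagnetic noise as an ordinary signal, extract spectral features around power-line harmonics, and train a standard classifier on labeled touches. The feature definition, the SVM model, and the commented-out helper names (`record_noise`, `training_signals`) are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: spectral features of body-coupled EM noise fed to an
# off-the-shelf classifier; all parameters and helpers are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def noise_features(signal: np.ndarray, sample_rate: int = 96_000) -> np.ndarray:
    """Bin the magnitude spectrum around the first 50 harmonics of 60 Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    harmonics = np.arange(1, 51) * 60.0
    return np.array([spectrum[np.argmin(np.abs(freqs - h))] for h in harmonics])

# Hypothetical usage with labeled training recordings:
# X = np.vstack([noise_features(s) for s in training_signals])
# model = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
# prediction = model.predict([noise_features(record_noise())])
```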
2007 2nd International Conference on Pervasive Computing and Applications, 2007
This research looks at one approach to providing mobile phone users with a simple, low-cost, real-time user interface that allows them to control highly interactive public-space applications involving one user or a large number of simultaneous users. To accurately sense the real-time hand-movement gestures of mobile phone users, the method uses miniature accelerometers that send orientation signals over the network's audio channel to a central computer for signal processing and application delivery. This ensures minimal delay, minimal connection-protocol incompatibility, and minimal discrimination by mobile phone type or version. Without the need for mass user compliance, large numbers of users could begin to control public-space cultural and entertainment applications using simple gesture movements.
2017
For four decades, researchers have been creating flute hyperinstruments, either by mounting sensors on the flute body or the human body, or by creating wind-like instruments embedded with sensors. Primarily, the desire has been to extend and/or enhance an artist's performance technique. More recent technologies provide near real-time interaction and rich datasets, and are becoming easier, faster, and cheaper to use; as a result, previous flute hyperinstruments are becoming increasingly obsolete. A motivation for designing this newest flute hyperinstrument stems from a desire to gain a deeper understanding and awareness of the performative gesture features that occur during flute performance. This paper presents the extensive iterative process of upgrading previous components toward this motivation. The acquired gesture features are used as part of a larger project 1) to improve real-time, audio-only signal processing techniques and 2) to gain an understanding of ancillary gesture features present durin...
2016 International Conference on Development and Application Systems (DAS), 2016
Gesture interaction is now available on a variety of devices that expose touch and/or free-hand gesture interfaces. However, users' capacity to interact efficiently with computing systems using gestures is still not well understood. To this end, the community needs appropriate software tools to record users' gesture input in both experimental and live settings. In this work, we present three software tools that capture and record users' touch and free-hand gesture input on mobile devices. Together, these tools form a general and unique system that can collect precise data about users' free-hand and touch gestures. The variety of gesture data and gesture measurements reported by our tools will enable practitioners and researchers to form a thorough understanding of their users' performance during gesture interaction with mobile devices. Our tools are free to download and use to collect users' gesture performance on mobile devices in various experimental settings.
Proceedings of the 13th International Joint Conference on Computational Intelligence
Due to the rapid advancement of ubiquitous technologies, new pervasive methods have come into practice to provide innovative features and stimulate research on new human-computer interactions. This paper presents a hand gesture recognition method that utilizes the smartphone's built-in speakers and microphones. The proposed system emits an ultrasonic sonar-based signal (inaudible sound) from the smartphone's stereo speakers, which is then received by the smartphone's microphone and processed via a Convolutional Neural Network (CNN) for hand gesture recognition. Data augmentation techniques are proposed to improve the detection accuracy, and three dual-channel input fusion methods are compared. The first method merges the dual-channel audio into a single input spectrogram image. The second method adopts early fusion by concatenating the dual-channel spectrograms. The third method adopts late fusion, with two convolutional input branches each processing one of the dual-channel spectrograms and their outputs merged in the final layers. Our experimental results demonstrate a promising baseline detection accuracy of 93.58% for the six gestures in our publicly available dataset.
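The late-fusion variant can be sketched as two convolutional branches, one per channel spectrogram, whose flattened outputs are concatenated before the final classification layers. Layer sizes, the 1 x 64 x 64 spectrogram shape, and the use of PyTorch below are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of a late-fusion dual-channel spectrogram classifier;
# the architecture details are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class LateFusionCNN(nn.Module):
    def __init__(self, num_gestures: int = 6):
        super().__init__()
        def branch():
            # One convolutional branch per audio channel's spectrogram.
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
            )
        self.left, self.right = branch(), branch()
        self.classifier = nn.Sequential(
            nn.Linear(2 * 32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_gestures),
        )

    def forward(self, spec_left, spec_right):
        # Each input: (batch, 1, 64, 64) spectrogram of one audio channel.
        features = torch.cat([self.left(spec_left), self.right(spec_right)], dim=1)
        return self.classifier(features)

# Example: logits = LateFusionCNN()(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64))
```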
Motion gestures are an underutilized input modality for mobile interaction, despite numerous potential advantages. Negulescu et al. found that the lack of feedback on attempted motion gestures made it difficult for participants to diagnose and correct errors, resulting in poor recognition performance and user frustration. Here, we describe and evaluate a training and feedback system consisting of two techniques that use audio characteristics to provide: (1) a spatial representation of the desired gesture and (2) feedback on the system's interpretation of user input. Results show that while both techniques provide adequate feedback, users prefer continuous feedback.
Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, 2019
Figure 1: (a) AudioTouch is a micro-gesture recognition approach based on active bio-acoustic sensing without requiring any instrumentation on users' fingers or palm. (b) It recognizes micro-gestures with small differences among various finger gestures. (c+d) It also allows for discrimination of force, further expanding interaction vocabulary. (e) This approach enables several compelling application scenarios such as device-free input in mobile scenarios.