Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '01), 2001
Quiet Calls is a technology allowing mobile telephone users to respond to telephone conversations without talking aloud. QC-Hold, a Quiet Calls prototype, combines three buttons for responding to calls with a PDA/mobile phone unit to silently send pre-recorded audio directly into the phone. This permits a mixed-mode communication where callers in public settings use a quiet means of communication, and other callers experience a voice telephone call. An evaluation of QC-Hold shows that it is easily used and suggests ways in which Quiet Calls offers a new form of communication, extending the choices offered by synchronous phone calling and asynchronous voicemail.
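As a rough illustration of the interaction this abstract describes (three buttons, each injecting a pre-recorded clip into the live call), the following Python sketch shows one way the button-to-clip dispatch could look. The button names, clip paths, and the `play_into_call` hook are illustrative assumptions, not details of the actual QC-Hold prototype.

```python
# Hypothetical sketch of a Quiet Calls style button dispatch.
# Button names and clip paths are assumptions, not QC-Hold's real ones.
BUTTON_CLIPS = {
    "acknowledge": "clips/yes_go_on.wav",      # e.g. "Yes, go ahead."
    "defer":       "clips/call_you_back.wav",  # e.g. "I'll call you back."
    "hold":        "clips/please_hold.wav",    # e.g. "Please hold on."
}

def respond(button: str, play_into_call=print) -> str:
    """Answer silently by routing the clip mapped to `button` into the call.

    `play_into_call` stands in for whatever feeds audio into the phone's
    voice channel in a real prototype.
    """
    try:
        clip = BUTTON_CLIPS[button]
    except KeyError:
        raise ValueError(f"unknown button: {button!r}")
    play_into_call(clip)
    return clip
```

The point of the three-button design is that each press is a complete conversational move, so the caller on the other end hears ordinary speech while the user never talks aloud.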
In this work, we propose a novel design for a basic mobile phone, focused on the essence of mobile communication and connectivity, based on a silent speech interface and auditory feedback. This assistive interface retains the advantages of voice control systems while discarding their disadvantages, such as background noise, privacy concerns and limited social acceptance. The proposed device uses low-cost, commercially available hardware components, so it would be affordable and accessible to the majority of users, including disabled, elderly and illiterate people.
Proceedings of the ninth international conference on Multimodal interfaces - ICMI '07, 2007
Advances in mobile communication technologies have allowed people in more places to reach each other more conveniently than ever before. However, many mobile phone communications occur in inappropriate contexts, disturbing others in close proximity, invading personal and corporate privacy, and more broadly breaking social norms. This paper presents a telephony system that allows users to answer calls quietly and privately without speaking. The paper discusses the iterative process of design, implementation and system evaluation. The resulting system is a VoIP-based telephony system that can be immediately deployed from any phone capable of sending DTMF signals. Observations and results from inserting and evaluating this technology in real-world business contexts through two design cycles of the Touch-Talk feature are reported.
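Because the abstract notes that the system can be driven from any phone capable of sending DTMF signals, the server side essentially reduces to mapping incoming digits to spoken responses. The sketch below is a hypothetical illustration of that mapping; the digit assignments and response texts are assumptions, not the actual Touch-Talk design.

```python
# Hypothetical digit-to-response table; a real VoIP server would speak the
# selected text to the caller (e.g. via TTS) when the DTMF digit arrives.
DTMF_RESPONSES = {
    "1": "Yes.",
    "2": "No.",
    "3": "I'm in a meeting; I'll call you back.",
}

def handle_dtmf(digits: str) -> list[str]:
    """Translate a sequence of received DTMF digits into responses to speak.

    Unmapped digits are silently ignored, so stray key presses do nothing.
    """
    return [DTMF_RESPONSES[d] for d in digits if d in DTMF_RESPONSES]
```

This is what makes the approach immediately deployable: DTMF is supported by essentially every phone, so no client-side software is needed.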
2006
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 61-63). Current mobile technology works well to connect individuals at any time or place. However, the general focus on one-to-one conversations has overlooked the potential of always-on group and community links. I hypothesize that asynchronous persistent audio is a superior medium to support scalable always-on group communication for mobile devices. To evaluate this claim, one must first have an adequate interaction design before it is possible to investigate the qualities and usage patterns over the long term. This design does not exist for mobile devices. This thesis takes the first step in this direction by creating and evaluating an initial design called RadioActive. RadioActive is a technological and interaction design for persistent mobile audio chat spaces, focusing on the key issue of navigating asynchronous audio. If RadioActive is shown to be a good design in the long term, I hope to prove with additional studies the value of asynchronous persistent audio.
2010
Newport is a collaborative application for sharing context (e.g. location) and content (e.g. photos and notes) during mobile phone calls. People can share during a phone call and sharing ends when the call ends. Newport also supports using a computer during a call to make it easier to share content from the phone or launch screen sharing if the caller is also at a computer. We describe Newport's system design and a formative evaluation with 12 participants to study their experience using Newport to share location, receive directions, share photos, and perform desktop sharing. Participants preferred using Newport to current methods for these tasks. They also preferred limiting sharing location to phone calls compared with publishing it continuously. Tying sharing to a phone call gives individuals a social sense of security, providing a mechanism for exchanging information with unknown people.
2012 IEEE Second International Conference on Consumer Electronics - Berlin (ICCE-Berlin), 2012
The spread of Voice over Internet Protocol (VoIP) services, equipment and clients is transforming telephony worldwide. In addition to providing inexpensive, or even free, international telephone calls, there is potentially additional benefit in using computer networks to facilitate telephony. Currently, hearing-impaired and deaf users are excluded from these VoIP services; unless the message is in text form to begin with, the hearing-impaired user cannot access these services effectively. The VoIPText project, funded by Ofcom, looked at the feasibility of using speech-to-text software to generate text from natural speech over VoIP and so improve accessibility. The project assessed the accuracy of current state-of-the-art Automatic Speech Recognition (ASR) software to determine whether it achieves a reasonable level of performance. Assessments of software developed during the project were carried out in people's homes over eight weeks; the trials indicated some issues with the implementation but showed real potential for ASR in telecommunications for deaf people.
2012
ForcePhone is a mobile synchronous haptic communication system. During phone calls, users can squeeze the side of the device, and the pressure level is mapped to vibrations on the recipient's device. The pressure/vibrotactile messages supported by ForcePhone are called pressages. Using a lab-based study and a small field study, this paper addresses the question of how haptic interpersonal communication can be integrated into a standard mobile device.
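The core mapping this abstract describes (squeeze pressure on one device translated into vibration on the other) can be sketched as a simple clamp-and-quantize function. The pressure range and number of vibration levels below are illustrative assumptions, not ForcePhone's actual parameters.

```python
def pressure_to_level(pressure: float, levels: int = 5,
                      max_pressure: float = 10.0) -> int:
    """Map a raw squeeze-pressure reading to a discrete vibration level.

    Readings are clamped to [0, max_pressure], then scaled to 0..levels.
    The units and range are hypothetical; a real sensor would define them.
    """
    clamped = max(0.0, min(float(pressure), max_pressure))
    return round(clamped / max_pressure * levels)
```

Quantizing into a few discrete levels, rather than streaming raw pressure, keeps the "pressage" robust to sensor noise and network jitter between the two phones.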
2008
Even though natural language voice-only input applications may be quite successful in desktop or office environments, voice may not be an appropriate input modality in some mobile situations. This necessitates the addition of a secondary input modality, ideally one with which users can express the same amount of content as they can with natural language using a similar amount of effort. The work presented here describes one possible solution to this problem - leveraging existing help mechanisms in the voice-only application to support an additional non-voice input modality, in our case text input. The user can choose the speech or text modality according to their current situation (e.g. noisy environment) and have the same interaction experience.
CHI'04 extended abstracts on Human factors in …, 2004
Human Factors and Voice Interactive Systems, 1999
Lecture Notes in Computer Science, 2004
The goal of the present study is to introduce a speaking interface of mobile devices for speech-impaired people. The latest devices (including PDAs with integrated telephone, Smartphones, Tablet PCs) possess numerous favorable features: small size, portability, considerably fast processor speed, increased storage size, telephony, large display and convenient development environment. Standardized easy-to-use speech I/O is missing, however. The majority of vocally handicapped users are elderly people who are often not familiar with computers. Many of them have other disorder(s) (e.g. motor) and/or impaired vision. The paper reports the design and implementation aspects of converting standard devices into a mobile speaking aid for face-to-face and telephone conversations. The device can be controlled and text is input by touch-screen, and the output is generated by a text-to-speech system. The interface is configurable (screen colors and text size, speaking options, etc.) according to the users' personal preferences.
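The configurable speaking-aid loop described above (touch-screen text input, TTS output, user-adjustable preferences) can be sketched as follows. The preference fields and the toy TTS callable are hypothetical stand-ins, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Preferences:
    """User-adjustable settings, mirroring the kinds of options the abstract
    mentions (text size, colors, speaking options). Field names are assumed."""
    text_size: int = 24          # large text for users with impaired vision
    bg_color: str = "white"
    speaking_rate: float = 1.0   # 1.0 = normal TTS speed

def speak(text: str, prefs: Preferences,
          tts=lambda t, rate: f"[{rate}x] {t}") -> str:
    """Send touch-screen input to the TTS engine using the user's settings.

    The default `tts` is a toy stand-in that just tags the text with the
    speaking rate; a real device would call an actual TTS engine here.
    """
    return tts(text, prefs.speaking_rate)
```

Keeping the preferences in one small structure is what makes the same device usable both face-to-face (loudspeaker) and over the phone (audio into the call).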
We present a framework, CALL-SLT Lite, which can be used by people with only very basic software skills to create interactive multimodal speech-enabled CALL games suitable for beginner/low intermediate child language learners. The games are deployed over the web and can be accessed through a normal browser, and the framework is completely language-independent. As the name suggests, the framework grew out of an earlier platform, CALL-SLT, which enables construction of similar games but uses a more sophisticated architecture. We review the history of the project, describing the type of game we are aiming to build and our reasons for believing that they are useful, and then present CALL-SLT Lite and an initial evaluation comparing the performance of the two versions of the framework. The results suggest that the Lite framework, although much simpler, offers performance at least as good as that of the original system.
Whenever we talk on a cell phone in a crowd, we often end up yelling to be heard over the surrounding noise. There is, however, no need to scream to convey a message. A new technology known as Silent Sound Technology, developed at the Karlsruhe Institute of Technology, promises to put an end to this noise problem, and you can expect to see it in the near future. It is also a promising solution for people who have lost their voice but wish to speak on mobile phones. When this technology is used, it detects every lip movement, internally converts the associated electrical pulses into sound signals, and transmits them while ignoring all surrounding noise. Rather than picking up any sound, the handset deciphers the movements your mouth makes by measuring muscle activity and converts this into speech that the person on the other end of the call can hear; essentially, it reads lips. It will benefit people who dislike talking loudly on cell phones and allows silent calls that do not disturb others, so we can talk with friends or family in private without anyone eavesdropping. Another important benefit is that, because the electrical pulses are universal, they can be converted into any language of the user's choice. The technology currently works for languages such as English, French and German, but not for tonal languages such as Chinese, where different tones carry different meanings. At the other end, the listener hears a clear voice. The device is claimed to work with 99% efficiency and could appear on the market within the next 5-10 years; once launched, it is expected to be widely used.
… Design and Evaluation for Mobile …, 2008
The use of a voice interface, along with textual, graphical, video, tactile, and audio interfaces, can improve the experience of the user of a mobile device. Many applications can benefit from voice input and output on a mobile device, including applications that provide travel directions, weather information, restaurant and hotel reservations, appointments and reminders, voice mail, and e-mail. We have developed a prototype system for a mobile device that supports client-side, voice-enabled applications. In fact, the prototype supports multimodal interactions but, here, we focus on voice interaction. The prototype includes six voice-enabled applications and a program manager that manages the applications. In this chapter we describe the prototype, including design issues that we faced, and evaluation methods that we employed in developing a voice-enabled user interface for a mobile device.
Lecture Notes in Computer Science, 2006
Speech- and/or hearing-impaired people have difficulties with voice communication. In face-to-face conversation they can find a common communication channel (e.g. sign language, paper, etc.), but without an appropriate system they are unable to talk over the phone. The goal of the present study is to introduce the design and development steps of a system for vocally and/or hearing-impaired people which helps them communicate via telephone with any person. Speech output is realized by text-to-speech (TTS) technology and speech input is provided by automatic speech recognition (ASR). The visual and speech user interfaces enable the users on both sides of the phone line (a speech- and hearing-impaired person at one end, a person without speech or hearing disabilities at the other) to communicate.
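The relay architecture described above (TTS for the impaired user's outgoing text, ASR for the other party's incoming speech) can be sketched with stand-in engines. The `fake_tts`/`fake_asr` stubs below are toy assumptions that only mark where real synthesis and recognition engines would plug in.

```python
def fake_tts(text: str) -> str:
    """Stand-in TTS: pretend to synthesize audio from text."""
    return f"<audio:{text}>"

def fake_asr(audio: str) -> str:
    """Stand-in ASR: pretend to recognize the synthesized audio back to text."""
    return audio.removeprefix("<audio:").removesuffix(">")

def relay_outgoing(text: str, tts=fake_tts) -> str:
    """Text typed by the speech-impaired user is spoken to the other party."""
    return tts(text)

def relay_incoming(audio: str, asr=fake_asr) -> str:
    """The other party's speech is shown as text on the user's display."""
    return asr(audio)
```

The key property of such a system is that the non-disabled party needs no special equipment: they simply hear synthesized speech and talk back normally.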
Proceedings of the 27th …, 2009
In this paper we present a system that simulates urgency-augmented phone calls on mobile phones. Different scenarios and interaction techniques are discussed. We report a user study that indicated a general need for such a system and explored the applicability ...