2019
Based on the ontology developed in the ongoing EU-funded SUITCEYES project, which bridges visual analytics for situational awareness and navigation with semantic labelling of environmental cues, we designed a set of dynamic haptograms to represent concepts for two-way communication between deafblind and non-deafblind users. A haptogram corresponds to a tactile symbol drawn over a touchscreen; its dynamic nature refers to the act of writing or drawing, and the touchscreen can take several forms, including a smart-textile screen designated for specific areas of the body. In its current version, our haptogram set is generated over a 4×4 matrix of cells, displayed on the user's back, and tested for robustness at the receiving end. The concepts and concept sequences represented by dynamic haptograms, which simulate simple questions and answers, are focused on ontology content for now but can be scaled up.
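As a concrete illustration of the abstract above, the sketch below models a dynamic haptogram as an ordered path of cell activations on the 4×4 matrix. The Haptogram class, field names, and timing are hypothetical assumptions; only the 4×4 grid and the stroke-like dynamics come from the abstract.

```python
# Hypothetical sketch: a dynamic haptogram as an ordered path of cell
# activations over a 4x4 actuator matrix. Names and timings are
# illustrative assumptions, not the SUITCEYES implementation.
from dataclasses import dataclass

GRID_SIZE = 4  # 4x4 matrix of tactile cells, as described in the abstract

@dataclass
class Haptogram:
    concept: str                 # the ontology concept this symbol encodes
    path: list[tuple[int, int]]  # ordered (row, col) cells traced on the matrix
    dwell_ms: int = 150          # assumed per-cell activation time

    def validate(self) -> None:
        for r, c in self.path:
            if not (0 <= r < GRID_SIZE and 0 <= c < GRID_SIZE):
                raise ValueError(f"cell ({r}, {c}) is outside the {GRID_SIZE}x{GRID_SIZE} grid")

def play(h: Haptogram) -> None:
    """Drive the actuators in path order; here we just log the schedule."""
    h.validate()
    for step, (r, c) in enumerate(h.path):
        print(f"t={step * h.dwell_ms}ms: activate cell ({r}, {c}) for '{h.concept}'")

# Example: an L-shaped stroke that might stand for a spatial concept.
play(Haptogram("left", [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]))
```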
For visually impaired users, making sense of spatial information is difficult, as they have to scan and memorize content before being able to analyze it. Even worse, any update to the displayed content invalidates their spatial memory, which can force them to manually rescan the entire display. Making display contents persist, we argue, is thus the highest priority in designing a sensemaking system for the visually impaired. We present a tactile display system designed with this goal in mind. The foundation of our system is a large tactile display (140x100 cm, 23x larger than HyperBraille), which we achieve by using a 3D printer to print raised lines of filament. The system's software then uses the large space to minimize screen updates. Instead of panning and zooming, for example, our system creates additional views, leaving display contents intact and thus preserving the user's spatial memory. We illustrate our system and its design principles with four spatial applications. We evaluated our system with six blind users. Participants responded favorably to the system and expressed, for example, that having multiple views at the same time was helpful. They also judged the increased expressiveness of lines over the more traditional dots as useful for encoding information.
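The "additional views instead of updates" principle can be made concrete with a small allocator sketch: new content claims fresh physical space rather than overwriting what is already printed. The class and its shelf-style allocation policy are assumptions for illustration; only the display dimensions come from the abstract.

```python
# A minimal sketch of the "add views instead of updating" idea: on a large
# tactile surface, new content gets a fresh region while existing printed
# content stays put, preserving the user's spatial memory. The allocation
# policy and names are assumptions, not the authors' implementation.
class TactileCanvas:
    def __init__(self, width_cm: float = 140.0, height_cm: float = 100.0):
        self.width_cm = width_cm
        self.height_cm = height_cm
        self.views: list[dict] = []   # immutable once placed: never erased
        self._next_x = 0.0            # naive left-to-right shelf allocator

    def add_view(self, label: str, width_cm: float, height_cm: float) -> dict:
        if self._next_x + width_cm > self.width_cm:
            raise RuntimeError("display full: physical space is the budget here")
        view = {"label": label, "x": self._next_x, "y": 0.0,
                "w": width_cm, "h": height_cm}
        self.views.append(view)       # append-only: no pan, no zoom, no redraw
        self._next_x += width_cm
        return view

canvas = TactileCanvas()
canvas.add_view("overview map", 60, 100)
canvas.add_view("zoomed detail", 40, 60)   # a new view, not an in-place zoom
print([v["label"] for v in canvas.views])
```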
Technology is advancing at a rapid rate, but blind and visually impaired people risk being left behind by the rapid digitization of information. The Blind Pad is intended to bridge this gap and help these people perceive digital and graphical information. The Blind Pad project started in 2018 as an inter-departmental project at the University of Bath (UOB) that includes the Department of Electronic and Electrical Engineering (EEE). The EEE department coordinates with other departments, such as the Department of Psychology, to ensure that the Blind Pad offers a holistic benefit to its users. My role throughout this project was to improve on the first prototype of the Blind Pad by investigating methods to increase its tactile-pixel resolution, by making the Blind Pad more portable, and through other improvements. The Blind Pad should eventually lead to improved social inclusion and employment prospects for its users. Through simulation in ANSYS and practical testing, this project showed that the multiple tactile pixels incorporated in the Blind Pad could potentially be controlled in a new and more efficient way. Additionally, this project led to the fabrication of a printed circuit board (PCB) that would make the Blind Pad more compact and portable. This report therefore outlines the context of the project, the associated literature review, and the methodology used throughout. It also includes simulation and testing results and their analysis.
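The report summary does not say how the tactile pixels are controlled; as one hypothetical illustration of why fewer driver lines can suffice, row/column matrix addressing drives R x C pixels with only R + C lines. This is a generic technique, not necessarily the method investigated in the project.

```python
# Hypothetical illustration only: row/column matrix addressing is a common
# way to drive many tactile pixels with far fewer driver lines than
# one-per-pixel. The project's actual control scheme is not specified.
def scan_matrix(frame: list[list[bool]]) -> list[tuple[int, int]]:
    """Return the (row, col) drive pairs needed to raise the active taxels.

    With R rows and C cols, R + C driver lines address R * C pixels,
    e.g. 8 + 8 = 16 lines for 64 taxels instead of 64 dedicated drivers.
    """
    pairs = []
    for r, row in enumerate(frame):
        for c, raised in enumerate(row):
            if raised:
                pairs.append((r, c))
    return pairs

# A 4x4 frame with a raised diagonal.
frame = [[r == c for c in range(4)] for r in range(4)]
print(scan_matrix(frame))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```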
Universal Access in the Information Society
Purpose: Digital media has brought a revolution, making the world a global village. For people who are visually impaired, and for people with combined visual and hearing impairment, navigating the digital world can be as precarious as moving through the real world. To enable them to connect with the digital world, we propose a solution, the Haptic Encoded Language Framework (HELF), which uses haptic technology to let them write digital text using swiping gestures and read text through vibrations. Method: We developed an Android application to present the concept of HELF and evaluate its performance. We tested the application with 13 users (five visually impaired and eight sighted individuals). Results: The preliminary exploratory analysis of the proposed framework using the Android application reveals encouraging results: overall reading accuracy is approximately 91%, and the average CPM is 25.7. Conclusion: The volunteering users of the HELF Android application found it useful as a means of using digital media and recommended it as an assistive technology for the visually challenged. Their performance motivates further research and development to make HELF more usable by people who are visually impaired and people with combined visual and hearing impairment.
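The abstract does not spell out HELF's encoding; the sketch below assumes a Morse-like codebook mapping characters to vibration pulses, purely to make the reading path concrete. All names, pulse lengths, and the codebook itself are illustrative assumptions.

```python
# Illustrative sketch of a haptic reading path, assuming a Morse-like
# codebook. This is NOT HELF's actual encoding, which the abstract
# does not specify.
import time

# Hypothetical pulse lengths in seconds.
SHORT, LONG, GAP = 0.1, 0.3, 0.15

CODEBOOK = {           # illustrative subset only
    "a": (SHORT, LONG),
    "b": (LONG, SHORT, SHORT, SHORT),
    "c": (LONG, SHORT, LONG, SHORT),
}

def vibrate(duration_s: float) -> None:
    """Stand-in for a platform vibration call (e.g. Android's Vibrator)."""
    print(f"buzz {int(duration_s * 1000)}ms")
    time.sleep(duration_s)

def read_haptically(text: str) -> None:
    for ch in text.lower():
        for pulse in CODEBOOK.get(ch, ()):
            vibrate(pulse)
            time.sleep(GAP)       # inter-pulse gap
        time.sleep(2 * GAP)       # inter-character gap

read_haptically("abc")
```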
Studia Informatica, 2023
Nowadays, users need to work simultaneously with many files and folders on their computer and with various resources on the Internet, including several pieces of software. This means they have many windows open at once and must continuously switch among these programs. These problems are also crucial for blind or visually impaired (BVI) users, for whom the issue is even more difficult: because of the nature of their work, i.e., they do not use the mouse and cannot see video and graphic content, they have to remember, at each moment, which programs they have open and which they no longer use. In this paper, we propose an approach based on universal software in which, on the one hand, the user can collect and view data and, on the other hand, produce ready-made output packages as simple web pages. The solution consists of two main parts: the offline software WPad and the online Personal Knowledge Information System (PIKS). We propose a simplified and more automated version of the earlier-developed WPad software system, called WPad for BVI, which visually impaired persons and other beginners can use. The solution was tested by two BVI users (one BVI student and one BVI teacher) and several non-disabled teachers and students from Poland and Slovakia. User feedback was collected and will be used to improve the system further.
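The offline-to-online pipeline can be pictured with a small export sketch: notes collected in a WPad-like workspace become a plain, screen-reader-friendly web page. The data model, function, and file names are assumptions, not the actual WPad/PIKS format.

```python
# A minimal sketch of the offline-to-online idea: locally collected notes
# are exported as a simple, accessible web page. The note structure and
# names are assumptions, not WPad's or PIKS's real data model.
from html import escape
from pathlib import Path

notes = [
    {"title": "Lecture 1", "body": "Introduction to the course."},
    {"title": "Homework", "body": "Read chapter 2."},
]

def export_page(items: list[dict], out: Path) -> None:
    sections = "\n".join(
        f"<section><h2>{escape(n['title'])}</h2><p>{escape(n['body'])}</p></section>"
        for n in items
    )
    out.write_text(
        "<!DOCTYPE html><html lang='en'><head><meta charset='utf-8'>"
        "<title>My notes</title></head><body><main>"
        f"{sections}</main></body></html>",
        encoding="utf-8",
    )

export_page(notes, Path("notes.html"))  # ready to publish as a simple web page
```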
Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI '10), 2010
This paper presents research showing that a high degree of skilled performance is required for multimodal discourse support. We discuss how students who are blind or visually impaired (SBVI) were able to understand the instructor's pointing gestures during planar geometry and trigonometry classes. To do so, the SBVI must attend to the instructor's speech while having simultaneous access to the instructional graphic material and to where the instructor is pointing. We developed the Haptic Deictic System (HDS), capable of tracking the instructor's pointing and informing the SBVI, through a haptic glove, where she needs to move her hand to understand the instructor's illustration-augmented discourse. Several challenges had to be overcome before the SBVI were able to engage in fluid multimodal discourse with the help of the HDS. We discuss how these challenges were addressed with respect to perception and discourse (especially in mathematics instruction).
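The guidance loop the abstract describes can be sketched as: compare the student's hand position with the instructor's pointing target, then cue a direction on the glove. The four-motor layout, arrival radius, and function below are assumptions, not the HDS implementation.

```python
# Illustrative sketch of the guidance loop: pick a direction cue for a
# haptic glove from the offset between hand and pointing target. Motor
# layout and thresholds are assumptions, not the HDS design.
import math

MOTORS = {0: "right", 90: "up", 180: "left", 270: "down"}  # assumed 4-motor glove

def direction_cue(hand: tuple[float, float],
                  target: tuple[float, float],
                  arrived_radius: float = 1.0) -> str:
    dx, dy = target[0] - hand[0], target[1] - hand[1]
    if math.hypot(dx, dy) <= arrived_radius:
        return "all"                      # pulse all motors: on target
    angle = math.degrees(math.atan2(dy, dx)) % 360
    # Choose the motor whose bearing is closest, accounting for wraparound.
    nearest = min(MOTORS, key=lambda a: min(abs(angle - a), 360 - abs(angle - a)))
    return MOTORS[nearest]

print(direction_cue(hand=(0, 0), target=(10, 1)))   # -> 'right'
print(direction_cue(hand=(5, 5), target=(5, 5.5)))  # -> 'all'
```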
Modelling, Measurement and Control C
With a screen reader and Braille display, trained blind persons can nowadays access many activities using computers. However, graphical interfaces and content in which the spatial dimension is essential for understanding, such as charts, pictures, or the majority of video games, remain hardly accessible. The Tactos and Intertact.net technologies aim to overcome these limits by providing an efficient sensory-supplementation technology that enables blind users to access the spatial dimension of content through touch. Following a participatory design approach, we have worked in cooperation with blind persons to develop a learning environment for touch access to digital content with Tactos. Adoption matters when developing technologies, and we report here on the research we conducted to enable blind persons to learn our system independently. From our perspective, this possibility is a cornerstone for the development of a user community.
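Tactos-style sensory supplementation can be illustrated by sampling a small receptive field around the pointer and raising tactile pins over dark pixels. The field size, threshold, and names below are assumptions for illustration, not the actual Tactos design.

```python
# Sketch of a sensory-supplementation loop: a small receptive field around
# the pointer samples the image, and dark pixels trigger tactile pins.
# Field shape and threshold are illustrative assumptions.
def receptive_field(image: list[list[int]], x: int, y: int,
                    size: int = 2, threshold: int = 128) -> list[bool]:
    """Return pin activations for a size x size field anchored at (x, y)."""
    pins = []
    for dy in range(size):
        for dx in range(size):
            px, py = x + dx, y + dy
            inside = 0 <= py < len(image) and 0 <= px < len(image[0])
            pins.append(inside and image[py][px] < threshold)  # dark => raise pin
    return pins

# A tiny grayscale image: a dark vertical line at column 1.
img = [[255, 0, 255], [255, 0, 255], [255, 0, 255]]
print(receptive_field(img, 0, 0))  # pins over the dark line are raised
```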
Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology
We present TangibleGrid, a novel device that allows blind users to understand and design the layout of a web page with real-time tangible feedback. We conducted semi-structured interviews and a series of co-design sessions with blind users to elicit insights that guided the design of TangibleGrid. Our final prototype contains shape-changing brackets representing the web elements and a baseboard representing the web page canvas. Blind users can design a web page layout by creating and editing web elements, snapping or adjusting tangible brackets on top of the baseboard. The baseboard senses the brackets' type, size, and location, verbalizes the information, and renders the web page in the client browser. Through a formative user study, we found that blind users could understand a web page layout through TangibleGrid. They were also able to design a new web layout from scratch without the help of sighted people.
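The baseboard-to-browser step can be sketched by mapping sensed brackets onto a CSS grid. The bracket record format below is an assumption; the paper's actual sensing protocol and renderer may differ.

```python
# Hypothetical sketch of the rendering step: brackets sensed as
# (type, row, col, row span, col span) records become a CSS grid page.
# The record format is an assumption, not TangibleGrid's protocol.
def render_layout(brackets: list[dict], rows: int = 6, cols: int = 6) -> str:
    cells = "\n".join(
        f"<div style='grid-area: {b['row']} / {b['col']} / "
        f"span {b['rspan']} / span {b['cspan']}'>{b['type']}</div>"
        for b in brackets
    )
    return (f"<div style='display: grid; "
            f"grid-template-rows: repeat({rows}, 1fr); "
            f"grid-template-columns: repeat({cols}, 1fr)'>\n{cells}\n</div>")

layout = [
    {"type": "image",  "row": 1, "col": 1, "rspan": 2, "cspan": 3},
    {"type": "button", "row": 3, "col": 1, "rspan": 1, "cspan": 1},
]
print(render_layout(layout))
```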
Lecture Notes in Computer Science, 2007
Sign languages are visual-gestural languages developed mainly in deaf communities; their tempo-spatial nature makes them difficult to write, yet several transcription systems are available for them. Most sign language dictionaries interact with their users ...
Proceedings of the 8th International Conference on Multimodal Interfaces (ICMI '06), 2006
Students who are blind are typically one to three years behind their sighted counterparts in mathematics and science. We posit that a key reason for this resides in the inability of such students to access the multimodal embodied communicative behavior of mathematics instructors. This impedes the ability of blind students and their teachers to maintain situated communication.
The new Information and Communication Technologies (ICTs) have come to be of great importance in facilitating communication and, subsequently, access to information and knowledge, thereby improving the living conditions of people in general and especially those with special needs. In this project we present Symbolum, a tool that uses ICT to aid the communication of individuals with cerebral palsy. Although cerebral palsy manifests itself in different ways, affecting different parts of the body, there is usually a barrier that the majority of individuals have difficulty overcoming: the inability to communicate verbally or in writing. Thus, the main objective of this project was to create an alternative communication platform based on communication through graphic symbols, called pictographic boards.
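A pictographic board can be pictured as a mapping from symbol cells to the phrases they communicate, with messages composed by selecting cells. The vocabulary and names below are illustrative, not Symbolum's actual board.

```python
# A minimal sketch of a pictographic board: each cell pairs a symbol image
# with the phrase it communicates. Symbols and phrases are illustrative
# assumptions, not Symbolum's vocabulary.
BOARD = {
    "eat.png":   "I want to eat",
    "drink.png": "I want to drink",
    "yes.png":   "yes",
    "no.png":    "no",
}

def compose_message(selected: list[str]) -> str:
    """Concatenate the phrases behind the symbols the user selected."""
    return ". ".join(BOARD[s] for s in selected if s in BOARD)

print(compose_message(["eat.png", "yes.png"]))  # "I want to eat. yes"
```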