2010
In many applications, the primary goal of a spatialized audio cue is to direct the user’s attention to the location of a visual object of interest in the environment. This type of auditory cueing is known to be very effective in environments that contain only a single visual target. However, little is known about the effectiveness of this technique in environments with more than one possible target location. In this experiment, participants were asked to identify the characteristics of a visual target presented in a field of visual distracters. In some conditions, a single auditory cue was provided. In other conditions, the auditory cue was accompanied by one or more audio distracters at different spatial locations. These conditions were compared to a control condition in which no audio cue was provided. The results show that listeners can extract spatial information from up to three simultaneous sound sources, but that their visual search performance is significantly degraded when ...
Hearing Research, 2008
The majority of research findings to date indicate that spatial cues play a minor role in enhancing listeners' ability to parse and detect a sound of interest when it is presented in a complex auditory scene comprising multiple simultaneous sounds. Frequency and temporal differences between sound streams provide more reliable cues for scene analysis as well as for directing attention to relevant auditory 'objects' in complex displays. The present study used naturalistic sounds with varying spectro-temporal profiles to examine whether spatial separation of sound sources can enhance target detection in an auditory search paradigm. The arrays of sounds were presented in virtual auditory space over headphones. The results of Experiment 1 suggest that target detection is enhanced when sound sources are spatially separated relative to when they are presented at the same location. Experiment 2 demonstrated that this effect is most prominent within the first 250 ms of exposure to the array of sounds. These findings suggest that spatial cues may be effective for enhancing early processes such as stream segregation, rather than simply directing attention to objects that have already been segmented.
Due to increased computational power, reproducing binaural hearing in real-time applications through the use of head-related transfer functions (HRTFs) is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF-enhanced audio system (3D) and an amplitude-panning audio system (panning) in a virtual environment. We present a performance study involving 33 participants locating aurally-aided visual targets placed at fixed positions under different audio conditions. A varying number of visual distractors were present, represented as black circles with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger numbers of distractors. The applications of this research could extend to virtual environments such as video games or virtual simulations.
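The amplitude-panning condition contrasted with HRTF rendering above can be sketched as a standard constant-power pan law, which distributes a mono sample between two channels so that total power stays constant across pan positions. This is a minimal illustrative sketch, not the authors' implementation; the function name and pan convention are assumptions.

```python
import math

def constant_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Constant-power amplitude panning (illustrative sketch).

    pan is in [-1, 1], where -1 is full left and +1 is full right.
    Mapping pan to an angle in [0, pi/2] and using cos/sin gains keeps
    left^2 + right^2 constant, so perceived loudness is stable as the
    source moves between the channels.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # [-1, 1] -> [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)

# A centred pan splits power equally between channels.
left, right = constant_power_pan(1.0, 0.0)
```

Unlike HRTF rendering, this pan law encodes only an interaural level difference, which is why it conveys less spatial detail than the 3D condition.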
We investigated exogenous and endogenous orienting of visual attention to the spatial location of an auditory cue. In Experiment 1, significantly faster saccades were observed to visual targets appearing ipsilateral, compared to contralateral, to the peripherally presented cue. This advantage was greatest in an 80% target-at-cue (TAC) condition but equivalent in 20% and 50% TAC conditions. In Experiment 2, participants maintained central fixation while making an elevation judgment of the peripheral visual target. Performance was significantly better for the cued side of the display, and this advantage was equivalent across the three expectancy conditions. Results point to attentional processes, rather than simply ipsilateral response preparation, and suggest that orienting visual attention to a sudden auditory stimulus is difficult to avoid.
Interaction between the listener and their environment in a spatial auditory display plays an important role in creating better situational awareness, resolving front/back and up/down confusions, and improving localization. Prior studies with 6DOF interaction suggest that using either a head tracker or a mouse-driven interface yields similar performance during a navigation and search task in a virtual auditory environment. In this paper, we present a study that compares listener performance in a virtual auditory environment under a static mode condition, and two dynamic conditions (head tracker and mouse) using orientation-only interaction. Results reveal tradeoffs among the conditions and interfaces. While the fastest response time was observed in the static mode, both dynamic conditions resulted in significantly reduced front/back confusions and improved localization accuracy. Training effects and search strategies are discussed.
Perception, 2001
Can auditory signals influence the processing of visual information? The present study examined the effects of simple auditory signals (clicks and noise bursts) whose onset was simultaneous with that of the visual target, but which provided no information about the target. It was found that such a signal enhances performance in the visual task: the accessory sound reduced response times for target identification with no cost to accuracy. The spatial location of the sound (whether central to the display or at the target location) did not modify this facilitation. Furthermore, the same pattern of facilitation was evident whether the observer fixated centrally or moved their eyes to the target. The results were not altered by changes in the contrast (and therefore visibility) of the visual stimulus or by the perceived utility of the spatial location of the sound. We speculate that the auditory signal may promote attentional ‘disengagement’ and that, as a result, observers are able to p...
Proceedings of the 22nd International Conference on Auditory Display - ICAD 2016, 2016
As visual display complexity grows, visual cues and alerts may become less salient and therefore less effective. Although the auditory system's resolution is rather coarse relative to the visual system, there is some evidence that virtual spatialized audio can benefit visual search within a small frontal region, such as a desktop monitor. Two experiments examined whether search times could be reduced compared to visual-only search through spatial auditory cues rendered using one of two methods: individualized or generic head-related transfer functions. Results showed that cue type interacted with display complexity, with larger reductions compared to visual-only search as set size increased. For larger set sizes, individualized cues were significantly better than generic cues overall. Across all set sizes, individualized cues were better than generic cues for cueing eccentric elevations (> ±8°). Where performance must be maximized, designers should use individualized virtual audio if at all possible, even within a small frontal region of the field of view.
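The HRTF rendering common to both cue types above amounts to convolving a mono source with the pair of head-related impulse responses (HRIRs) measured for the desired direction; individualized and generic cues differ only in whose HRIRs are used. The following is a minimal sketch under that assumption; the function name and the toy HRIR arrays are hypothetical, and real HRIRs would come from a measured dataset.

```python
import numpy as np

def render_binaural(mono: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Binaural rendering sketch: convolve a mono signal with the
    left/right head-related impulse responses for one source direction.
    Returns a (2, N) array of left and right ear signals."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy example: an impulse through asymmetric (hypothetical) HRIRs
# yields different left/right ear signals, i.e. a lateralized cue.
out = render_binaural(np.array([1.0, 0.0, 0.0, 0.0]),
                      np.array([0.5, 0.25]),
                      np.array([0.25, 0.5]))
```

Swapping in a listener's own measured HRIRs versus a generic set is precisely the individualized-versus-generic manipulation the experiments compare.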
This paper proposes a method of evaluating the effect of auditory display techniques on a complex visual search task. The approach uses a pre-existing visual search task (conjunction search) to create a standardized model for audio-assisted and non-audio-assisted visual search tasks. A pre-existing auditory display technique is evaluated to test the system. Using randomly generated images, participants were asked to undertake a series of visual search tasks of set complexities, with and without audio. It was shown that using the auditory feedback improved participants' visual search times considerably, with statistically significant results. Additionally, it was shown that there was a larger difference between audio and non-audio conditions when the complexity of the images was increased. The same auditory display techniques were then applied to an example of a real complex visual search task, the results of which imply a significant improvement in visual search efficiency when using auditory feedback.
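The conjunction-search paradigm mentioned above places a target that shares exactly one feature with every distractor (e.g. a red X among red Os and green Xs), so no single feature identifies it and search is effortful. A minimal sketch of generating such a display follows; the function name and feature values are hypothetical, not the paper's stimulus generator.

```python
import random

def conjunction_display(set_size: int) -> list[tuple[str, str]]:
    """Build a conjunction-search display as (colour, shape) items.

    One red-X target is embedded among distractors that each share
    exactly one feature with it (red-O or green-X), so neither colour
    nor shape alone singles out the target.
    """
    items = [("red", "X")]  # the unique target
    for _ in range(set_size - 1):
        items.append(random.choice([("red", "O"), ("green", "X")]))
    random.shuffle(items)
    return items
```

Because search time in such displays grows with set size, the paradigm gives a clean baseline against which any reduction from auditory cueing can be measured.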
Human Factors and Ergonomics Society Annual Meeting Proceedings
Auditory localization is an increasingly important topic as the technology for audio displays becomes more widely available. However, few studies examine the effects of multiple simultaneous distracters on auditory detection and localization performance. Previous research has found that detection and localization performance drop significantly as the number of distracters increases; however, it is not clear what causes these errors. In the present study, participants either had to localize an auditory stimulus or detect an auditory stimulus among multiple distracters. Consistent with previous research, the number of errors significantly increased as the number of active speakers increased, and detection performance was better than localization performance. The use of visual cues was found to benefit the localization group but did not significantly affect performance in the detection group. The present study also found that a longer interstimulus interval improved accuracy only in the localization group, and then only when visual cues were present. These findings provide insight into the complex nature of the auditory search task.