Papers by Marcia Grabowecky

Attention, Perception, & Psychophysics, 2014
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target, a measure of overt visual attention, was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.

Keywords: Multisensory processing; Visual search; Attention

Visual attention is known to be influenced by signals from other sensory modalities. In the domain of spatial attention, auditory, tactile, and visual cues produce spatially specific attentional facilitation or inhibition of visual processing (e.g., Driver & Spence, 1998). Converging evidence from patients with spatial attention deficits, such as neglect or extinction, has shown that auditory or tactile signals presented in one hemifield compete with visual signals in the opposite hemifield for drawing visual attention, as occurs with competing bilateral visual signals (e.g., Brozzoli, Demattè, Pavani, Frassinetti, & Farnè, 2006).
In addition to crossmodal spatial interactions, feature- and object-based crossmodal influences of audition on visual attention have been reported (Guzman

Psychonomic Bulletin & Review, 2014
Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information and they facilitate localization of the sought individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face/voice associations and verified learning using a two-alternative forced-choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent-learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent-learned voice speeded visual search for the associated face. This result suggests that voices facilitate visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.

Advances in neurology, 1995
Without a functioning dorsolateral prefrontal cortex, humans are stimulus bound and have little confidence in their ability to interact with the environment. Deficits in inhibitory control of external and internal processes coupled with impaired temporal coding of stimuli and detection capacity for novel events leave the patient functioning in a noisy internal environment without critical spatiotemporal cues. Some of these proposals are similar to those of Nauta (104). Based on connectivity of the prefrontal cortex, Nauta suggested that this region was ideally suited to generate and evaluate internal models of action. It is proposed that, in addition to this generation function, the prefrontal cortex is crucial for detecting changes in the external environment and for discriminating internally and externally derived models of the world. This chapter has described a cascade of deficits that result from damage to the dorsolateral prefrontal cortex. Awareness of the sensory world, and ...

Brain and Cognition, 2004
Data from experiments with split-brain patients, who have had their left and right hemispheres disconnected, suggest a remarkable specialization of function within each hemisphere. At the same time, these patients conduct their daily lives with great proficiency. This ability suggests that some information integral to coordinated function between the hemispheres is available in the absence of the corpus callosum. Is information about the semantics of words one type of information that is shared? An experiment by Lambert (1991) suggests that it may be. Lambert reported that living/nonliving word categorization was delayed when disconnected hemispheres processed words belonging to the same category. Although other interpretations are plausible, Lambert described this effect as having a semantic source. We attempted to replicate the original effect with two additional split-brain patients, J.W. and V.P., and to extend the original design to clarify the source of the putative semantic effect. Our results indicate that any semantic interaction between the split hemispheres is not reliable. As such, our study adds to the growing literature indicating that subcortical transfer of semantic information is more illusory than real.

Spatially heterogeneous flicker, characterized by probabilistic and locally independent luminance modulations, abounds in nature. It is generated by flames, water surfaces, rustling leaves, and so on, and it is pleasant to the senses. It affords spatiotemporal multistability that allows sensory activation conforming to the biases of the visual system, thereby generating the perception of spontaneous motion and likely facilitating the calibration of motion detectors. One may thus hypothesize that spatially heterogeneous flicker might potentially provide restoring stimuli to the visual system that engage fluent (requiring minimal top-down control) and self-calibrating processes. Here, we present some converging behavioral and electrophysiological evidence consistent with this idea. Spatially heterogeneous (multistable) flicker (relative to controls matched in temporal statistics) reduced posterior EEG (electroencephalography) beta power implicated in long-range neural interactions tha...

Attention, perception & psychophysics, Jan 20, 2017
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect w...

Journal of Cognitive Neuroscience, 2017
The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.
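As a quick sanity check on the effect sizes above (assuming each predictor was entered in a simple bivariate regression, which the abstract does not state explicitly), variance explained converts to a correlation magnitude via the square root:

```latex
r = \sqrt{R^2}, \qquad
\sqrt{0.16} = 0.40, \qquad
\sqrt{0.25} = 0.50
```

That is, the behavioral and electrophysiological synchrony measures would correspond to correlations of roughly .40 and .50 with reading comprehension, respectively.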

Cognitive Brain Research, 1999
At a glance, one can often determine whether a face belongs to a known individual. To investigate brain mechanisms underlying this memory feat, we recorded EEG signals time-locked to face presentations. In the study phase, 40 unknown faces were presented, 20 of which were accompanied by a voice simulating that person speaking. Instructions were to remember the faces with spoken biographical information (R-faces) and to forget the others (F-faces). In the test phase, famous and non-famous faces were presented in a visually degraded manner. Subjects made two-choice fame judgments and priming was observed in the form of faster and more accurate responses for old than for new non-famous faces. Priming did not differ between R-faces and F-faces. In a second experiment, faces were not degraded at test and behavioral responses were made only when faces were presented twice in immediate succession. Brain potentials elicited 300 to 900 ms after stimulus onset from frontal and parieto-occipital scalp regions were larger for R-faces than for F-faces. Recognition tested later was more accurate for R-faces than for F-faces. Because the study-phase manipulation influenced recognition but not priming, we conclude that this procedure succeeded in isolating neural correlates of recollective processing from more automatic uses of face memory as indexed by priming.
Neuron, 2002
nonlinear inhibitory interactions between channels that respond to the two competing percepts, with random neural noise (either in the rivaling inputs or in the inhibitory interactions) generating the stochastic properties (e.g., ...

Vision Research, 2002
When two overlapping displays alternate rapidly, it is difficult to resolve the temporal coincidence of objects, parts, or features. However, under certain conditions (at least for luminance-based stimuli) rapid temporal coincidence can be detected on the basis of stable emergent percepts in which parts that oscillate in phase appear more strongly grouped than (or appear distinct from) parts that oscillate out of phase. These emergent percepts appear as depth segregation, enhanced slow orientation rivalry, and oriented shimmer (a new phenomenon that cannot be explained in terms of conventional apparent motion or temporal contrast illusions). These percepts resulted in up to an eightfold decrease in the coincidence detection threshold (alternations as fast as 20 ms/frame or 25 Hz) relative to control conditions that did not yield them; these sensitivity enhancements are unlikely to be due to temporal probability summation. The results provide psychophysical evidence that temporal-phase information can contribute to the parsing of overlapping patterns.
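The two threshold figures above are the same quantity in different units: one full alternation cycle of a two-frame display spans two frame durations, so a 20 ms/frame limit corresponds to an alternation frequency of

```latex
f = \frac{1}{2 \times 0.020\,\mathrm{s}} = 25\,\mathrm{Hz}
```

(a worked unit conversion, not an additional result from the paper).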

Vision Research, 2005
Extensive research on local color aftereffects has revealed perceptual consequences of opponent color coding in the retina and the LGN, and of orientation- and/or spatial-frequency-contingent color coding in early cortical visual areas (e.g., V1 and V2). Here, we report a color aftereffect that depends crucially on global-form-contingent color processing. Brief viewing of colored items (passively viewed, ignored, or attended) reduced the salience of the previewed color in a subsequent task of color-based visual search. This color-salience aftereffect was relatively insensitive to variations (between color preview and search) in local image features, but was substantially affected by changes in global configuration (e.g., the presence or absence of perceptual unitization); the global-form dependence of the aftereffect was also modulated by task demands. The overall results suggest that (1) color salience is adaptively modulated (from fixation to fixation), drawing attention to a new color in visual-search contexts, and (2) these modulations seem to be mediated by global-form-and-color-selective neural processing in mid to late stages of the ventral visual pathway (e.g., V4 and IT), in combination with task-dependent feedback from higher cortical areas (e.g., prefrontal cortex).

Trends in Neurosciences, 2003
broader role in many types of neuron [12] and does not function uniquely in the adult brain as a regulator of learning and memory. The situation is further complicated by the fact that p190 RhoGAP is very closely related to the protein p190-B RhoGAP, which is also widely expressed in brain [24] and the disruption of which in mice is associated with neural defects [25]. The degree of functional redundancy between these two RhoGAPs has not yet been determined. Interestingly, mice lacking p190-B RhoGAP exhibit a nearly complete loss of phosphorylation of the cAMP-response-element-binding (CREB) transcription factor in brain [25], and CREB mutant mice reportedly exhibit defects in several aspects of fear conditioning [26]. Thus, it is possible that both p190 RhoGAPs participate in fear conditioning through distinct regulatory mechanisms. Overall, it is becoming increasingly clear that the response to fear is, at the molecular level, complex, involving numerous signaling proteins that undoubtedly perform multiple functions in both the developing and the mature nervous systems. With hindsight, it could be argued that the identification of the amygdala as a crucial brain region that mediates fear conditioning was relatively easy, and that the hard part has just begun.
Psychonomic Bulletin & Review, 2008

Psychonomic Bulletin & Review, 2012
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.

Psychological Science, 2011
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding.

Journal of Vision, 2009
High-level visual neurons in the ventral stream typically have large receptive fields, supporting position-invariant object recognition but entailing poor spatial resolution. Consequently, when multiple objects fall within their large receptive fields, unless selective attention is deployed, their responses are averages of responses to the individual objects. We investigated a behavioral consequence of this neural averaging in the perception of facial expressions. Two faces (7° apart) were briefly presented (100-ms, backward-masked) either within the same visual hemifield (within-hemifield condition) or in different hemifields (between-hemifield condition). Face pairs included happy, angry, and valence-neutral faces, and observers rated the emotional valence of a post-cued face. Perceptual averaging of facial expressions was predicted only for the within-hemifield condition because the receptive fields of 'face-tuned' neurons are primarily confined within the contralateral field; the between-hemifield condition served to control for post-perceptual effects. Consistent with averaging, valence-neutral faces appeared more positive when paired with a happy face than when paired with an angry face, and affective intensities of happy and angry faces were reduced by accompanying valence-neutral or opposite-valence faces, in the within-hemifield relative to the between-hemifield condition. We thus demonstrated within-hemifield perceptual averaging of a complex feature as predicted by neural averaging in the ventral visual stream.
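The neural-averaging account above can be written schematically (an illustrative equal-weight model, not the authors' formal analysis): when two faces fall in one large receptive field, the pooled response sits midway between the responses each face would evoke alone,

```latex
R_{\text{pair}} = \tfrac{1}{2}\bigl(R_{\text{A}} + R_{\text{B}}\bigr)
```

so a neutral face ($R \approx 0$) paired with a happy face inherits a positive shift, while a happy or angry face paired with a neutral one is pulled toward zero, matching both reported effects.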

Journal of Vision, 2011
Although local interactions involving orientation and spatial frequency are well understood, less is known about spatial interactions involving higher level pattern features. We examined interactive coding of aspect ratio, a prevalent two-dimensional feature. We measured perception of two simultaneously flashed ellipses by randomly post-cueing one of them and having observers indicate its aspect ratio. Aspect ratios interacted in two ways. One manifested as an aspect-ratio repulsion effect. For example, when a slightly tall ellipse and a taller ellipse were simultaneously flashed, the less tall ellipse appeared flatter and the taller ellipse appeared even taller. This repulsive interaction was long range, occurring even when the ellipses were presented in different visual hemifields. The other interaction manifested as a global assimilation effect. An ellipse appeared taller when it was a part of a global vertical organization than when it was a part of a global horizontal organization. The repulsion and assimilation effects temporally dissociated as the former slightly strengthened, and the latter disappeared when the ellipse-to-mask stimulus onset asynchrony was increased from 40 to 140 ms. These results are consistent with the idea that shape perception emerges from rapid lateral and hierarchical neural interactions.

Journal of Vision, 2010
When two targets are associated with the same response in a speeded task, response times are faster when both targets are presented simultaneously than when only one target is presented. This redundant-signal effect can be mediated by probability summation (race model) or by signal integration (co-activation) over and above probability summation. Here we report that the redundant-signal effect depends strongly on the way in which attention is engaged in the task. We manipulated attention using exogenous cueing (a flashed rectangle with no predictive value), combined endogenous and exogenous cueing (a flashed rectangle with predictive value), and pure endogenous attention using a symbolic central cue (with predictive value). The redundant-signal effect was strongly dependent on endogenous attention: it was absent when the redundant targets were presented in the uncued region in tasks that engaged endogenous attention. The redundant-signal effect occurred in both cued and uncued regions in the task that engaged only exogenous attention. The strategic distribution of attention is thus crucial for obtaining a redundant-signal effect, even one driven by probability summation.
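The race-model (probability-summation) baseline mentioned above can be sketched with a toy simulation. The reaction-time distribution and its parameters below are illustrative assumptions, not the study's data; the point is only that when two redundant targets each trigger an independent detection process and the response follows whichever finishes first, mean RT drops with no signal integration at all.

```python
import random

random.seed(0)

def single_rt():
    # Hypothetical single-target RT: 300 ms base latency plus
    # exponentially distributed decision time (mean 100 ms).
    # Purely illustrative parameters.
    return 300 + random.expovariate(1 / 100)

n = 10_000
single = [single_rt() for _ in range(n)]

# Race model: with two redundant targets, two independent detection
# processes run in parallel and the faster one drives the response.
redundant = [min(single_rt(), single_rt()) for _ in range(n)]

mean_single = sum(single) / n
mean_redundant = sum(redundant) / n

# Probability summation alone predicts faster mean RTs for redundant
# targets: the minimum of two independent samples is stochastically
# faster than either sample on its own.
print(f"single: {mean_single:.0f} ms, redundant: {mean_redundant:.0f} ms")
```

Tests for co-activation (e.g., Miller's race-model inequality) ask whether observed redundant-target RTs are faster than even this minimum-of-two baseline allows.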

Journal of Vision, 2005
We previously demonstrated long-term speeding of binocular rivalry (VSS 2004). We further investigated how the rate of perceptual alternations changed with both short- and long-term experience. During each 20 s trial, Os reported perceptual alternations between "+" and "x" shapes. When Os viewed the rivalry stimuli for the first time, alternations were slow, but they rapidly speeded, stabilizing after only 3-5 trials. Following this rapid initial speeding, alternation rates remained stable (at least across 40 trials) within a day, but gradually speeded across days, reaching an asymptote in 15-30 days. The initial rapid speeding transferred across visual hemifields, but was asymmetric; initial experience in the RVF produced slower asymptotic alternations in the LVF. In contrast, the long-term speeding was specific to position, orientation, luminance polarity, and eye of origin. To begin to identify the source of this speeding, we presented Os with subsets of the rivalry experience. To determine whether experience of rivalry is critical, we had Os experience pattern alternations (e.g., + in the left eye alternating with x in the right eye) that simulated the dynamics of actual rivalry. For long-term speeding, actual and simulated rivalry produced similarly stimulus-specific speeding, suggesting that the speeding was due to modifications of post-rivalry processes. Interestingly, the initial speeding appears to require experience of rivalry. Additional experiments determined contributions of experiencing (1) transitions without speeding, (2) binocular stimulus transitions, or (3) stimuli without transitions.
Overall, these results suggest that the rate of binocular rivalry is determined by at least two separate processes: (1) processing that resolves binocular conflict, which is fast adapting, stimulus general, and hemisphere asymmetric, and (2) post-rivalry pattern processing, which is slow adapting (potentially sleep dependent), stimulus specific, and hemisphere symmetric.