Grounded cognition accounts of semantic representation posit that brain regions traditionally linked to perception and action play a role in grounding the semantic content of words and sentences. Sensory-motor systems are thought to support partially abstract simulations through which conceptual content is grounded. However, which details of sensory-motor experience are included in, or excluded from, these simulations is not well understood. We investigated whether sensory-motor brain regions are differentially involved depending on the speed of actions described in a sentence. We addressed this issue by examining the neural signature of relatively fast (The old lady scurried across the road) and slow (The old lady strolled across the road) action sentences. The results showed that sentences that implied fast motion modulated activity within the right posterior superior temporal sulcus and the angular and middle occipital gyri, areas associated with biological motion and action perception. Sentences that implied slow motion resulted in greater signal within the right primary motor cortex and anterior inferior parietal lobule, areas associated with action execution and planning. These results suggest that the speed of described motion influences representational content and modulates the nature of conceptual grounding. Fast motion events are represented more visually, whereas motor regions play a greater role in representing conceptual content associated with slow motion.
Journal of Cognitive …, 2005
Retrieval of conceptual information from action pictures causes greater activation than from object pictures bilaterally in human motion areas (MT/MST) and nearby temporal regions. By contrast, retrieval of conceptual information from action words causes greater activation in left middle and superior temporal gyri, anterior and dorsal to the MT/MST. We performed two fMRI experiments to replicate and extend these findings regarding action words. In the first experiment, subjects performed conceptual judgments of action and object words under conditions that stressed visual semantic information. Under these conditions, action words again activated posterior temporal regions close to, but not identical with, the MT/MST. In the second experiment, we included conceptual judgments of manipulable object words in addition to judgments of action and animal words. Both action and manipulable object judgments caused greater activity than animal judgments in the posterior middle temporal gyrus. Both of these experiments support the hypothesis that middle temporal gyrus activation is related to accessing conceptual information about motion attributes, rather than alternative accounts based on lexical or grammatical factors. Furthermore, these experiments provide additional support for the notion of a concrete-to-abstract gradient of motion representations within the lateral occipito-temporal cortex, extending anterior and dorsal from the MT/MST towards the peri-sylvian cortex.
Language and Linguistics Compass, 2012
A central issue in understanding how language links the mental and the real world is the nature of the mental representations entertained during language processing. Are these mental representations closely linked to the perceptual experiences from which they were formed, or are they somewhat removed from them? This review addresses this question by examining studies that have investigated motion verbs and sentences using functional magnetic resonance imaging. These studies tested whether language processing elicits activity in modality-specific brain regions responsive to motion perception. Although the results of these studies are not definitive due to the different tasks and analysis techniques utilized, they so far suggest that modality-specific brain regions processing visual motion are not automatically or habitually engaged in language processing. The occasional engagement of visual areas in language processing appears to result from tasks requiring integration of visual and linguistic information or attention to motion-specific features such as direction. The evidence reviewed therefore suggests that although perceptual representations may be flexibly engaged as a function of tasks and contexts, language comprehension in the absence of visual contexts habitually engages experience-based representations of motion events that are one step removed from visual experiences, even in situations in which imagery is encouraged.
Understanding verbs typically activates posterior temporal regions and, in some circumstances, motion perception area V5. However, the nature and role of this activation remains unclear: does language alone indeed activate V5? And are posterior temporal representations modality-specific motion representations, or supra-modal, motion-independent event representations? Here, we address these issues by investigating human and object motion sentences compared to corresponding state descriptions. We adopted the blank screen paradigm, which is known to encourage visual imagery, and used a localizer to identify V5 and temporal structures responding to motion. Analyses in each individual brain suggested that language modulated activity in the posterior temporal lobe but not within V5 in most participants. Moreover, posterior temporal structures strongly responded both to motion sentences and to static sentences describing humans. These results suggest that descriptive language alone need not recruit V5 and instead engages more schematic event representations in temporal cortex encoding animacy and motion.
Frontiers in Psychology, 2010
Theories of embodied language comprehension propose that the neural systems used for perception, action, and emotion are also engaged during language comprehension. Consistent with these theories, behavioral studies have shown that the comprehension of language that describes motion is affected by simultaneously perceiving a moving stimulus (Kaschak et al., 2005). In two neuroimaging studies, we investigate whether comprehension of sentences describing moving objects activates brain areas known to support the visual perception of moving objects (i.e., area MT/V5). Our data indicate that MT/V5 is indeed selectively engaged by sentences describing objects in motion toward the comprehender compared to sentences describing visual scenes without motion. Moreover, these sentences activate areas along the cortical midline of the brain, known to be engaged when participants process self-referential information. The current data thus suggest that sentences describing situations with potential relevance to one's own actions activate both higher-order visual cortex as well as brain areas involved in processing information about the self. The data have consequences for embodied theories of language comprehension: first, they show that perceptual brain areas support sentential-semantic processing. Second, the data indicate that sensory-motor simulations of events described through language are susceptible to top-down modulation by factors such as the relevance of the described situation to the self.
Cognition, 2005
Recently developed accounts of language comprehension propose that sentences are understood by constructing a perceptual simulation of the events being described. These simulations involve the re-activation of patterns of brain activation that were formed during the comprehender's interaction with the world. In two experiments we explored the specificity of the processing mechanisms required to construct simulations during language comprehension. Participants listened to (and made judgments on) sentences that described motion in a particular direction (e.g. "The car approached you"). They simultaneously viewed dynamic black-and-white stimuli that produced the perception of movement in the same direction as the action specified in the sentence (i.e. towards you) or in the opposite direction (i.e. away from you). Responses were faster to sentences presented concurrently with a visual stimulus depicting motion in the opposite direction to the action described in the sentence. This suggests that the processing mechanisms recruited to construct simulations during language comprehension are also used during visual perception, and that these mechanisms can be quite specific.
Some studies have reported that understanding concrete action-related words and sentences elicits activations of motor areas in the brain. The present fMRI study goes one step further by testing whether this is also the case for comprehension of nonfactual statements. Three linguistic structures were used (factuals, counterfactuals, and negations), referring either to actions or, as a control condition, to visual events. The results showed that action sentences elicited stronger activations than visual sentences in the SMA, extending to the primary motor area, as well as in regions generally associated with the planning and understanding of actions (left superior temporal gyrus, left and right supramarginal gyri). Also, we found stronger activations for action sentences than for visual sentences in the extrastriate body area, a region involved in the visual processing of human body movements. These action-related effects occurred not only in factuals but also in negations and counterfactuals, suggesting that brain regions involved in action understanding and planning are activated by default even when the actions are described as hypothetical or as not happening. Moreover, some of these regions overlapped with those activated during the observation of action videos, indicating that understanding action language and observing real actions share neural networks. These results support the claim that embodied representations of linguistic meaning are important even in abstract linguistic contexts.
2015
Recent studies have identified neural correlates of language effects on perception in static domains of experience such as colour and objects. The generalization of such effects to dynamic domains like motion events remains elusive. Here, we focus on grammatical differences between languages relevant for the description of motion events and their impact on visual scene perception. Two groups of native speakers of German or English were presented with animated videos featuring a dot travelling along a trajectory towards a geometrical shape (endpoint). English is a language with grammatical aspect, in which attention is drawn to trajectory and endpoint of motion events equally. German, in contrast, is a non-aspect language which highlights endpoints. We tested the comparative perceptual saliency of trajectory and endpoint of motion events by presenting motion event animations (primes) followed by a picture symbolising the event (target): in 75% of trials, the animation was followed by a mismatching picture (both trajectory and endpoint were different); in 10% of trials, only the trajectory depicted in the picture matched the prime; in 10% of trials, only the endpoint matched the prime; and in 5% of trials both trajectory and endpoint matched, which was the condition requiring a response from the participant. In Experiment 1 we recorded event-related brain potentials elicited by the picture in native speakers of German and native speakers of English. German participants exhibited a larger P3 wave in the endpoint match than the trajectory match condition, whereas English speakers showed no P3 amplitude difference between conditions. In Experiment 2 participants performed a behavioural motion matching task using the same stimuli as those used in Experiment 1. German and English participants did not differ in response times, showing that motion event verbalisation cannot readily account for the difference in P3 amplitude found in the first experiment.
We argue that, even in a non-verbal context, the grammatical properties of the native language and associated sentence-level patterns of event encoding influence motion event perception, such that attention is automatically drawn towards aspects highlighted by the grammar.
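The prime–target design described in the abstract above hinges on its condition proportions (75% full mismatch, 10% trajectory-only match, 10% endpoint-only match, 5% full match as the go condition). As an illustrative sketch only — the function and trial counts are hypothetical, not taken from the paper — such a trial schedule could be generated like this:

```python
import random

# Condition proportions from the design described above:
# 75% full mismatch, 10% trajectory-only match,
# 10% endpoint-only match, 5% full match (the response/go condition).
CONDITIONS = {
    "full_mismatch": 0.75,
    "trajectory_match": 0.10,
    "endpoint_match": 0.10,
    "full_match": 0.05,
}

def build_trial_list(n_trials=200, seed=0):
    """Return a shuffled list of condition labels in the stated proportions."""
    trials = []
    for condition, proportion in CONDITIONS.items():
        trials.extend([condition] * round(n_trials * proportion))
    rng = random.Random(seed)  # fixed seed for a reproducible schedule
    rng.shuffle(trials)
    return trials

trials = build_trial_list()
```

With 200 trials this yields 150 full mismatches, 20 of each partial match, and 10 full-match (response) trials, keeping responses rare so the P3 is measured on non-response trials.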
2010
Can linguistic semantics affect neural processing in feature-specific visual regions? Specifically, when we hear a sentence describing a situation that includes motion, do we engage neural processes that are part of the visual perception of motion? How about if a motion verb is used figuratively, not literally? We used fMRI to investigate whether semantic content can "penetrate" and modulate neural populations that are selective to specific visual properties during natural language comprehension. Participants were presented audiovisually with three kinds of sentences: motion sentences ("The wild horse crossed the barren field."), static sentences ("The black horse stood in the barren field."), and fictive motion sentences ("The hiking trail crossed the barren field."). Motion-sensitive visual areas (MT+) were localized individually in each participant, as well as face-selective visual regions (fusiform face area; FFA). MT+ was activated significantly more for motion sentences than the other sentence types. Fictive motion sentences also activated MT+ more than the static sentences. Importantly, no modulation of neural responses was found in FFA. Our findings suggest that the neural substrates of linguistic semantics include early visual areas specifically related to the represented semantics, and that figurative uses of motion verbs also engage these neural systems, but to a lesser extent. These data are consistent with a view of language comprehension as an embodied process, with neural substrates as far reaching as early sensory brain areas that are specifically related to the represented semantics.
2010
Recent theories have hypothesized that semantic representations of action verbs and mental representations of action may be supported by partially overlapping, distributed brain networks. An fMRI experiment in healthy participants was designed to identify the regions common and specific to three different tasks performed on a common set of object drawings (manipulable man-made objects (MMO) and biological objects (MBO)): the generation of action words (GenA), the mental simulation of action (MSoA), and the miming of an action with the right hand (MimA). A fourth task, object naming (ON), was used as a control for input/output effects. A null conjunction identified a common neural network consisting of nine regions distributed over premotor, parietal, and occipital cortices. Within this common network, GenA elicited significantly more activation than either ON or MSoA in the left inferior frontal region, while MSoA elicited significantly more activation than either ON or GenA in the left superior parietal lobule. Both MSoA and GenA activated the left inferior parietal lobule more than ON. Furthermore, the left superior parietal cortex was activated to a greater extent by MMO than by MBO regardless of task. These results suggest that action-denoting verbs and motor representations of the same actions activate a common frontal-parietal network. The left inferior parietal cortex and the left superior parietal cortex are likely to be involved in the retrieval of spatial-temporal features of object manipulation; the former might relate to the grasping and manipulation of any object, while the latter might be linked to specific object-related gestures.
Perception does not function as an isolated module but is tightly linked with other cognitive functions. Several studies have demonstrated an influence of language on motion perception, but it remains debated at which level of processing this modulation takes place. Some studies argue for an interaction in perceptual areas, but it is also possible that the interaction is mediated by “language areas” that integrate linguistic and visual information. Here, we investigated whether language–perception interactions were specific to the language-dominant left hemisphere by comparing the effects of language on visual material presented in the right (RVF) and left visual fields (LVF). Furthermore, we determined the neural locus of the interaction using fMRI. Participants performed a visual motion detection task. On each trial, the visual motion stimulus was presented in either the LVF or in the RVF, preceded by a centrally presented motion word (e.g., “rise”). The motion word could be congruent, incongruent, or neutral with regard to the direction of the visual motion stimulus that was presented subsequently. Participants were faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. Interestingly, the speed benefit was present only for motion stimuli that were presented in the RVF. We observed a neural counterpart of the behavioral facilitation effects in the left middle temporal gyrus, an area involved in semantic processing of verbal material. Together, our results suggest that semantic information about motion retrieved in language regions may automatically modulate perceptual decisions about motion.