Studies on affordances typically focus on single objects. We investigated whether affordances are modulated by context, defined by the relation between two objects and a hand. Participants were presented with pictures displaying two manipulable objects linked by a functional relation (knife-butter), a spatial relation (knife-coffee mug), or no relation. They pressed a key to indicate whether or not the objects were related. To determine whether observing others' actions and understanding their goals would facilitate these judgments, the pictures showed either (a) a hand displayed near the objects, (b) a hand grasping an object to use it, (c) a hand grasping an object to manipulate/move it, or (d) no hand. RTs were faster when objects were functionally rather than spatially related. Manipulation postures were the slowest in the functional context, and functional postures were inhibited in the spatial context, probably due to a mismatch between the inferred goal and the context. The absence of this interaction in Experiment 2, where participants responded with their feet instead of their hands, suggests that the effects are due to motor simulation rather than to associations between context and hand postures.
palm.mindmodeling.org
Recently, researchers have suggested that when we see an object, we automatically represent how that object affords action. However, the precise nature of this representation remains unclear: is it a specific motor plan or a more abstract response code? Furthermore, do action representations actually influence what we perceive? In Experiment 1, participants responded to an image of an object and then made a laterality judgment about an image of a hand. Hand identification was fastest when the hand corresponded to both the orientation and grasp type of the object, suggesting that affordances are represented as specific action plans. In Experiment 2, participants saw an image of a hand before interpreting an ambiguous object drawing. Responses were biased towards the interpretation that was congruent with the grasp type of the hand prime. Together, these results suggest that action representations play a critical role in object perception.
We investigate, using language, which motor information is automatically activated by observing 3D objects (manipulation vs. function), and whether this information is modulated by the objects' location in space. Participants were shown 3D pictures of objects located in peripersonal vs. extrapersonal space. Immediately afterwards, they were presented with function, manipulation, or observation verbs (e.g., "to-drink", "to-grasp", "to-look at") and were required to judge whether the verb was compatible with the presented object.
The perception of tool-object pairs involves understanding their action-relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool-object affordances. To this end, eye-movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool-object images across three contexts: correct (hammer-nail), incorrect (hammer-paper), and spatial/ambiguous (hammer-wood), and three grasp-types: no hand, functional grasp-posture (grasp of the hammer-handle), and non-functional/manipulative grasp-posture (grasp of the hammer-head). There were three Areas of Interest (AOIs): the object (nail), the operant tool-end (hammer-head), and the graspable tool-end (hammer-handle). Participants passively evaluated whether tool-object pairs were functionally correct/incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially across the correct and spatial tool-object contexts and to a lesser extent within the incorrect tool-object context. Permutation tests revealed that the grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp-affordances even though the task required evaluating the tool-object context. Participants also primarily focused on the object and the operant tool-end and sparsely attended to the graspable tool-end, even in images with functional grasp-postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action priming effect wherein the observer may be evaluating how the tool engages with the object. Unlike the functional grasp-posture, the manipulative grasp-posture was a gaze attractor and caused the greatest disruption of the object-oriented priming effect, ostensibly because it does not afford tool-object action, given its non-functional interaction with the operant tool-end that actually engages with the object (e.g. hammer-head to nail). The enhanced attention may therefore serve to encode the intent of the manipulative grasp-posture. The results show how contextual and grasp-specific affordances directly modulate how an observer gathers action information when evaluating static tool-object scenes.
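The robustness analysis above relies on permutation testing over scanpath clusters. The abstract does not spell out the authors' exact procedure, so the following is only a minimal sketch of the general logic of a label-permutation test, applied to hypothetical per-trial AOI dwell vectors; the `cluster_separation` statistic, the trial counts, and the simulated data are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_separation(dwell, labels):
    """Mean between-cluster distance minus mean within-cluster distance
    of per-trial AOI dwell vectors (larger = better-separated clusters)."""
    d = np.linalg.norm(dwell[:, None, :] - dwell[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    return d[~same].mean() - d[same & off_diag].mean()

# Hypothetical data: dwell proportions over the three AOIs
# (object, operant tool-end, graspable tool-end), one row per trial,
# with a grasp-condition label (0 = no hand, 1 = functional, 2 = manipulative).
dwell = rng.dirichlet(np.ones(3), size=60)
labels = rng.integers(0, 3, size=60)

observed = cluster_separation(dwell, labels)

# Null distribution: shuffle condition labels and recompute the statistic.
n_perm = 5000
null = np.array([cluster_separation(dwell, rng.permutation(labels))
                 for _ in range(n_perm)])

# One-sided p-value: how often shuffled labels separate at least as well.
p = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"observed separation = {observed:.3f}, p = {p:.4f}")
```

Note that this sketch shuffles condition labels rather than the temporal order of fixations; a test of temporal-order robustness, as described in the abstract, would permute within-scanpath fixation order instead, but the inferential logic is the same.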
Frontiers in Human Neuroscience, 2015
The mere observation of pictures or words referring to manipulable objects is sufficient to evoke their affordances, since objects and their nouns elicit components of the appropriate motor programs associated with object interaction. While nobody doubts that objects actually evoke motor information, the degree of automaticity of this activation has recently been disputed. Recent evidence has indeed revealed that affordance activation is flexibly modulated by the task and by the physical and social context. It is therefore crucial to understand whether these results challenge previous evidence showing that motor information is activated independently of the task. The context and the task can indeed act as an early or a late filter. We will review recent data consistent with the notion that objects automatically elicit multiple affordances and that top-down processes select among them, probably inhibiting motor information that is not consistent with behavioral goals. We will therefore argue that automaticity and flexibility of affordances are not in conflict. We will also discuss how language can incorporate affordances, showing similarities, but also differences, between the motor information elicited by vision and by language. Finally, we will show how the distinction between stable and variable affordances can accommodate all these effects.
Psychonomic Bulletin & Review, 2011
Experimental Brain Research, 2010
Stimulus position is coded even when it is task-irrelevant, leading to faster response times when the stimulus and response locations are compatible (spatial Stimulus–Response Compatibility, or spatial SRC). Faster responses are also found when the handle of a visual object and the response hand are located on the same side; this is known as the affordance effect (AE). Two contrasting accounts of the AE have classically been proposed. One is focused on the recruitment of appropriate grasping actions on the object handle, the other on the asymmetry of the object shape, which in turn would cause a handle-hand correspondence effect (CE). To disentangle these two accounts, we investigated the possible transfer of practice from a spatial SRC task executed with an incompatible S–R mapping to a subsequent affordance task in which objects with either an intact or a broken handle were used. The idea was that using objects with broken handles should prevent the recruitment of motor information relative to object grasping, whereas practice transfer should prevent object asymmetry from driving the handle-hand CE. A total of three experiments were carried out. In Experiment 1, participants underwent an affordance task in which common graspable objects with an intact or broken handle were used. In Experiments 2 and 3, the affordance task was preceded by a spatial SRC task in which an incompatible S–R mapping was used. Inter-task delays of 5 or 30 min were employed to assess the duration of the transfer effect. In Experiment 2, objects with their intact handle were presented, whereas in Experiment 3 the same objects had their handle broken. Although objects with intact and broken handles elicited a handle-hand CE in Experiment 1, practice transfer from an incompatible spatial SRC task to the affordance task was found in Experiment 3 (broken-handle objects) but not in Experiment 2 (intact-handle objects). Overall, this pattern of results indicates that both object asymmetry and the activation of motor information contribute to the generation of the handle-hand CE, and that the handle AE cannot be reduced to an SRC effect.
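For concreteness, the handle-hand CE discussed here is simply the mean RT difference between non-corresponding and corresponding trials. Below is a minimal sketch of that computation on made-up trial-level data; all values are hypothetical and not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial-level data: response hand, handle side, RT in ms.
n = 200
hand = rng.choice(["left", "right"], size=n)
handle = rng.choice(["left", "right"], size=n)
rt = rng.normal(480, 60, size=n)
rt[hand == handle] -= 15  # build in a small correspondence advantage

corresponding = rt[hand == handle].mean()
non_corresponding = rt[hand != handle].mean()

# Positive values indicate faster responses on corresponding trials.
ce = non_corresponding - corresponding
print(f"handle-hand correspondence effect: {ce:.1f} ms")
```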
Journal of Experimental Psychology: Human Perception and Performance, 2017
Correspondence effects based on the relationship between the left/right position of a pictured object's handle and the hand used to make a response, or on the size of the object and the nature of a grip response (power/precision), have been attributed to motor affordances evoked by the object. Effects of this nature, however, are readily explained by the similarity in the abstract spatial coding of the features that define the stimulus and response, without recourse to object-based affordances. We propose that in the task context of making reach-and-grasp actions, pictured objects may evoke genuine, limb-specific action constituents. We demonstrate that when subjects make reach-and-grasp responses, there is a qualitative difference in the time course of correspondence effects induced by pictures of objects versus the names of those objects. For word primes, this time course was consistent with the abstract spatial coding account, in which effects should emerge slowly and become apparent only among longer response times. In contrast, correspondence effects attributable to object primes were apparent even among the shortest response times and were invariant across the entire response-time distribution. Using rotated versions of object primes provided evidence for a short-lived competition between canonical and depicted orientations of an object with respect to eliciting components of associated actions. These results suggest that under task conditions requiring reach-and-grasp responses, pictured objects rapidly trigger constituents of real-world actions.

Public Significance Statement: Our experiments examine the relationship between the perception of objects and knowledge about their associated actions. We propose that when observers have an intention to make some form of hand action, such as grasping a handled object, they are likely to activate knowledge about the motor actions typically applied to objects they perceive. We demonstrate separate influences of action representations typically associated with an object, and actions that are invited by the object's perceived orientation (e.g., an object may be lying on its side). We show that both types of action knowledge may be available simultaneously and may compete when the observer is cued to make one specific action. Aspects of the relationship between object perception and action may provide clues to understanding neurological disorders in which patients are unable to identify objects or are impaired when trying to properly interact with them using manual actions.
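Time-course claims of this kind are commonly assessed by examining the correspondence effect at successive quantiles of the RT distribution (a delta-plot analysis). The abstract does not give the authors' exact procedure, so the sketch below only illustrates the generic technique on simulated data with invented parameters: a delta function that is flat across quantiles matches the object-prime pattern described above, while one that grows with RT matches the word-prime pattern.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated RTs (ms) for corresponding vs. non-corresponding trials.
rt_corr = rng.normal(470, 70, size=300)
rt_non = rng.normal(495, 70, size=300)

# Delta plot: effect size at successive points of the RT distribution.
quantiles = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
delta = np.quantile(rt_non, quantiles) - np.quantile(rt_corr, quantiles)

for q, d in zip(quantiles, delta):
    print(f"quantile {q:.1f}: correspondence effect = {d:5.1f} ms")
```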
Experimental Brain Research, 2007
The present study aimed to demonstrate that motor planning processes are affected by ignored affordances of the main body of an object. Participants were asked to select the response hand according to a property of a local component (the stalk) of an object (a fruit) while holding precision or power grip devices. The size of the object's main body was observed to prime hand selection processes asymmetrically. Right-hand responses were facilitated when the stalk was part of a precision-grip object (e.g. a strawberry) or displayed alone. In contrast, left-hand responses were facilitated when the stalk was part of a power-grip object (e.g. an apple). These data supported our previously presented view that the two hemispheres have differential roles in the early planning of manual actions: object information relevant to precision-grip planning appears to be processed predominantly in the left hemisphere, whereas information relevant to power-grip planning appears to be processed predominantly in the right hemisphere. In Experiment 3, the irrelevant fruit body had a slight effect on motor planning even though the stalk was spatially separated from the fruit body. The priming effect was entirely eliminated when, in addition to the spatial separation, the stalk was semantically dissociated from the fruit body (Experiment 4), and when the objects used in Experiment 1 were replaced by two-dimensional abstract objects (Experiment 2). Experiments 2, 3 and 4 suggested that affordances of an irrelevant main body of an object influence motor planning processes only when the local target component of the object is analysed as a meaningful part of a graspable object.