Psychological Science, 2017
Observers experience affordance-specific biases in visual processing for objects within the hands' grasping space, but the mechanism that tunes visual cognition to facilitate action remains unknown. I investigated the hypothesis that altered vision near the hands is a result of experience-driven plasticity. Participants performed motion-detection and form-perception tasks while their hands were either near the display, in atypical grasping postures, or positioned in their laps, both before and after learning novel grasp affordances. Participants showed enhanced temporal sensitivity for stimuli viewed near the backs of the hands after training to execute a power grasp using the backs of their hands (Experiment 1), but showed enhanced spatial sensitivity for stimuli viewed near the tips of their little fingers after training to use their little fingers to execute a precision grasp (Experiment 2). These results show that visual biases near the hands are plastic, facilitating process...
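The pre/post design above turns on quantifying a change in perceptual sensitivity after grasp training. As a minimal sketch (hedged: the abstract does not state the metric used), signal-detection d′ computed from hits and false alarms is one standard way such a change could be expressed; all counts below are hypothetical:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant, stimuli viewed near the trained hand surface
pre = d_prime(hits=62, misses=38, false_alarms=20, correct_rejections=80)
post = d_prime(hits=78, misses=22, false_alarms=18, correct_rejections=82)
print(f"pre d' = {pre:.2f}, post d' = {post:.2f}, change = {post - pre:.2f}")
```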
Brain and Cognition, 2011
How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This “affordances” hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common objects. We presented objects that have a strong significance for action (pinching and grasping) and objects with no such significance. Two experimental tasks involved participants viewing objects presented on a computer screen. For the first task, they were instructed to respond rapidly to changes in background colour by using an apparatus mimicking precision and power grip responses. For the second task, they received stimulation of their primary motor cortex using transcranial magnetic stimulation (TMS) while passively viewing the objects. Muscular responses (motor evoked potentials: MEPs) were recorded from two intrinsic hand muscles (associated with either a precision or power grip). The data showed an interaction between type of response (or muscle) and type of object, with both reaction time and MEP measures implying the generation of a congruent motor plan in the period immediately after object presentation. The results provide further support for the notion that the physical properties of objects automatically activate specific motor codes, but also demonstrate that this influence is rapid and relatively short-lived.
► How do objects automatically activate specific motor plans known as “affordances”?
► Task-irrelevant pictures shown to activate congruent grip actions.
► Affordance effect evident in both RTs and motor evoked potentials.
► Affordance effect arises rapidly and also dissipates quickly.
► Affordance effect evident for separate hand actions generated in the same hemisphere.
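The key result here is an interaction between response type (precision vs. power) and object type. As a sketch, that interaction can be summarized as a sum of congruency differences over the cell means; the reaction times below are hypothetical, not the paper's data:

```python
# Hypothetical mean reaction times (ms) in the 2x2 design:
# rows = response/muscle (precision, power), cols = object type (pinchable, graspable)
rt = {
    ("precision", "pinchable"): 412.0, ("precision", "graspable"): 431.0,
    ("power", "pinchable"): 438.0, ("power", "graspable"): 415.0,
}

# Congruency effect per response type: incongruent minus congruent RT
precision_effect = rt[("precision", "graspable")] - rt[("precision", "pinchable")]
power_effect = rt[("power", "pinchable")] - rt[("power", "graspable")]

# Both effects positive -> congruent motor plans are faster; their sum is the interaction contrast
print(f"precision congruency effect: {precision_effect:+.1f} ms")
print(f"power congruency effect:     {power_effect:+.1f} ms")
print(f"interaction contrast:        {precision_effect + power_effect:+.1f} ms")
```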
Frontiers in Psychology, 2013
Changes in visual processing near the hands may assist observers in evaluating items that are candidates for actions. If altered vision near the hands reflects adaptations linked to effective action production, then positioning the hands for different types of actions could lead to different visual biases. I examined the influence of hand posture on attentional prioritization to test this hypothesis. Participants placed one of their hands on a visual display and detected targets appearing either near or far from the hand. Replicating previous findings, detection near the hand was facilitated when participants positioned their hand on the display in a standard open palm posture affording a power grasp (Experiments 1 and 3). However, when participants instead positioned their hand in a pincer grasp posture with the thumb and forefinger resting on the display, they were no faster to detect targets appearing near their hand than targets appearing away from their hand (Experiments 2 and 3). These results demonstrate that changes in visual processing near the hands rely on the hands' posture. Although hands positioned to afford power grasps facilitate rapid onset detection, a pincer grasp posture that affords more precise action does not.
Quarterly Journal of Experimental Psychology, 2011
Viewing objects can result in automatic, partial activation of motor plans associated with them—“object affordance”. Here, we recorded grip force simultaneously from both hands in an object affordance task to investigate the effects of conflict between coactivated responses. Participants classified pictures of objects by squeezing force transducers with their left or right hand. Responses were faster on trials where the object afforded an action with the same hand that was required to make the response (congruent trials) compared to the opposite hand (incongruent trials). In addition, conflict between coactivated responses was reduced if it was experienced on the preceding trial, just like Gratton adaptation effects reported in “conflict” tasks (e.g., Eriksen flanker). This finding suggests that object affordance demonstrates conflict effects similar to those shown in other stimulus–response mapping tasks and thus could be integrated into the wider conceptual framework on overlearnt stimulus–response associations. Corrected erroneous responses occurred more frequently when there was conflict between the afforded response and the response required by the task, providing direct evidence that viewing an object activates motor plans appropriate for interacting with that object. Recording continuous grip force, as here, provides a sensitive way to measure coactivated responses in affordance tasks.
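A minimal sketch of how such a Gratton-style sequence analysis could be computed from trial-level data (hypothetical trials; not the authors' pipeline): the congruency effect, incongruent minus congruent RT, is compared as a function of the preceding trial's congruency:

```python
from statistics import mean

# Hypothetical trial sequence: (congruent?, reaction time in ms)
trials = [(True, 420), (False, 470), (False, 455), (True, 430),
          (True, 415), (False, 480), (True, 425), (False, 450)]

# Bin each trial (from the 2nd onward) by previous-trial congruency
bins = {(prev, cur): [] for prev in (True, False) for cur in (True, False)}
for (prev_c, _), (cur_c, rt) in zip(trials, trials[1:]):
    bins[(prev_c, cur_c)].append(rt)

def effect(prev):
    """Congruency effect (incongruent - congruent RT) given previous-trial type."""
    return mean(bins[(prev, False)]) - mean(bins[(prev, True)])

# Gratton adaptation: a smaller congruency effect after incongruent trials
print(f"effect after congruent:   {effect(True):.1f} ms")
print(f"effect after incongruent: {effect(False):.1f} ms")
```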
palm.mindmodeling.org
Recently, researchers have suggested that when we see an object we automatically represent how that object affords action. However, the precise nature of this representation remains unclear: is it a specific motor plan or a more abstract response code? Furthermore, do action representations actually influence what we perceive? In Experiment 1, participants responded to an image of an object and then made a laterality judgment about an image of a hand. Hand identification was fastest when the hand corresponded to both the orientation and grasp type of the object, suggesting that affordances are represented as specific action plans. In Experiment 2, participants saw an image of a hand before interpreting an ambiguous object drawing. Responses were biased towards the interpretation that was congruent with the grasp type of the hand prime. Together, these results suggest that action representations play a critical role in object perception.
PLOS ONE, 2016
We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger's contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias.
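The reported gaze bias can be quantified per trial by asking whether fixation lands closer to the index finger's contact point than to the thumb's. A minimal sketch with hypothetical 2-D coordinates (the study's actual coordinate frames and preprocessing are not given here):

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical per-trial data: fixation point, index contact, thumb contact (cm)
trials = [
    {"gaze": (1.2, 4.8), "index": (1.0, 5.0), "thumb": (1.0, 0.0)},
    {"gaze": (0.8, 3.9), "index": (1.1, 5.1), "thumb": (0.9, 0.1)},
    {"gaze": (1.0, 1.4), "index": (1.0, 4.9), "thumb": (1.0, 0.0)},
]

closer_to_index = sum(
    dist(t["gaze"], t["index"]) < dist(t["gaze"], t["thumb"]) for t in trials
)
print(f"{closer_to_index}/{len(trials)} fixations closer to the index contact point")
```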
PsycEXTRA Dataset, 2014
To investigate the role of action representations in the identification of upright and rotated objects, we examined the time course of their evocation. Across five experiments, subjects made vertically or horizontally oriented reach and grasp actions primed by images of handled objects that were depicted in upright or rotated orientations, at various Stimulus Onset Asynchronies (SOAs): −250 ms (action cue preceded the prime), 0 ms, and +250 ms. Congruency effects between action and object orientation were driven by the object's canonical (upright) orientation at the 0 ms SOA, but by its depicted orientation at the +250 ms SOA. Alignment effects between response hand and the object's handle appeared only at the +250 ms SOA, and were driven by the depicted orientation. Surprisingly, an attempt to replicate this finding with improved stimuli (Experiment 3) did not show significant congruency effects at the 0 ms SOA; a further examination of the 0 ms SOA in Experiments 4 and 5 also failed to reach significance. However, a meta-analysis of the latter three experiments showed evidence for the congruency effect, suggesting that the experiments might just have been underpowered. We conclude that subjects initially evoke a conceptually-driven motor representation of the object, and that only after some time can the depicted form become prominent enough to influence the elicited action representation.
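The closing meta-analytic step can be illustrated with a Stouffer's Z combination of per-experiment p-values, which shows how three individually non-significant results can be jointly significant. The p-values below are hypothetical, and the authors' actual method is not specified in the abstract:

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

# Hypothetical one-tailed p-values from three underpowered experiments
p_values = [0.09, 0.12, 0.07]

# Stouffer's method: convert to z-scores, sum, renormalize by sqrt(k)
z_scores = [nd.inv_cdf(1.0 - p) for p in p_values]
combined_z = sum(z_scores) / sqrt(len(z_scores))
combined_p = 1.0 - nd.cdf(combined_z)

# Each experiment alone misses p < .05, but the combined test does not
print(f"combined z = {combined_z:.2f}, combined p = {combined_p:.3f}")
```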
Child Development, 2012
Recent evidence suggests adults and infants selectively attend to features of action, such as how a hand contacts an object. The current research investigated whether this bias stems from infants' processing of the functional consequences of grasps: understanding that different grasps afford different future actions. A habituation paradigm assessed 10-month-old infants' (N = 62) understanding of the functional consequences of precision and whole-hand grasps in others' actions, and infants' own precision grasping abilities were also assessed. The results indicate infants understood the functional consequences of another's grasp only if they could perform precision grasps themselves. These results highlight a previously unknown aspect of early action understanding, and deepen our understanding of the relation between motor experience and cognition.
Journal of Experimental Psychology: Learning, Memory, and Cognition
Seeing pictures of objects activates the motor cortex and can have an influence on subsequent grasping actions. However, the exact nature of the motor representations evoked by these pictures is unclear. For example, action plans engaged by pictures could be most affected by direct visual input and computed online based on object shape. Alternatively, action plans could be influenced by experience seeing and grasping these objects. We provide evidence for a dual-route theory of action representations evoked by pictures of objects, suggesting that these representations are influenced by both direct visual input and stored knowledge. We find that familiarity with objects has a facilitative effect on grasping actions, with knowledge about the object's canonical orientation or its name speeding grasping actions for familiar objects compared to novel objects. Furthermore, the strength of contributions from each route to action can be modulated by the manner in which the objects are attended. Thus, evocation of grasping representations depends on an interaction between one's familiarity with perceived objects and how those objects are attended while making grasp actions.
Cortex, 2017
Preparing to grasp objects facilitates visual processing of object location, orientation and size, compared to preparing actions such as pointing. This influence of action on perception reflects mechanisms of selection in visual perception tuned to current action goals, such that action relevant sensory information is prioritized relative to less relevant information. In three experiments, rather than varying movement type (grasp vs point), the magnitude of a prepared movement (power vs precision grasps) was manipulated while visual processing of object size, as well as local/global target detection was measured. Early event-related potentials (ERP) elicited by task-irrelevant visual probes were enhanced for larger probes during power grasp preparation and smaller probes during precision grasp preparation. Local targets were detected faster following precision, relative to power grasp cues. The results demonstrate a direct influence of grasp preparation on sensory processing of size and suggest that the hierarchical dimension of objects may be a relevant perceptual feature for grasp programming. To our knowledge, this is the first evidence that preparing different magnitudes of the same basic action has systematic effects on visual processing.
Cognitive Systems Research, 2011
The Grasping Affordance Model (GAM) introduced here provides a computational account of perceptual processes enabling one to identify grasping action possibilities from visual scenes. GAM identifies the core of affordance perception with visuo-motor transformations enabling one to associate features of visually presented objects with a collection of hand grasping configurations. This account is coherent with neuroscientific models of the relevant visuo-motor functions and their localization in the monkey brain. GAM differs from other computational models of biological grasping affordances in its modeling focus, functional account, and tested abilities. Notably, by learning to associate object features with hand shapes, GAM generalizes its grasp identification abilities to a variety of previously unseen objects. Even though GAM's information processing does not involve semantic memory access or full-fledged object recognition, perceptions of (grasping) affordances are mediated by substantive computational mechanisms, which include learning of object parts, selective analysis of visual scenes, and guessing from experience.
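GAM's core step, associating visual object features with learned hand grasp configurations, can be sketched loosely as follows. This is an illustrative nearest-neighbour stand-in with invented toy features, not the authors' architecture, but it shows the same generalization idea: grasps learned from experienced objects transfer to unseen ones with similar features:

```python
import math

# Toy object features: (elongation, width_cm, graspable_part_size_cm),
# each paired with a grasp configuration learned from experience
training = [
    ((0.9, 1.0, 1.0), "precision"),   # small, thin items: pinch grip
    ((0.8, 1.5, 1.2), "precision"),
    ((0.3, 7.0, 6.5), "power"),       # bulky items: whole-hand grasp
    ((0.4, 6.0, 5.8), "power"),
]

def predict_grasp(features):
    """Associate a novel object's features with the nearest stored exemplar's grasp."""
    return min(training, key=lambda ex: math.dist(ex[0], features))[1]

# Generalization to previously unseen objects
print(predict_grasp((0.85, 1.2, 1.1)))  # -> precision
print(predict_grasp((0.35, 6.4, 6.0)))  # -> power
```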