The perception of tool-object pairs involves understanding their action-relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool-object affordances. To this end, eye-movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool-object images across three contexts: correct (hammer-nail), incorrect (hammer-paper), and spatial/ambiguous (hammer-wood), and three grasp-types: no hand, functional grasp-posture (grasping the hammer-handle), and non-functional/manipulative grasp-posture (grasping the hammer-head). There were three Areas of Interest (AOIs): the object (nail), the operant tool-end (hammer-head), and the graspable tool-end (hammer-handle). Participants passively evaluated whether tool-object pairs were functionally correct or incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially across the correct and spatial tool-object contexts and, to a lesser extent, within the incorrect tool-object context. Permutation tests revealed that the grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp-affordances even though the task required evaluating tool-object context. Participants also primarily focused on the object and the operant tool-end and sparsely attended to the graspable tool-end, even in images with functional grasp-postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action-priming effect wherein the observer may be evaluating how the tool engages with the object. Unlike the functional grasp-posture, the manipulative grasp-posture was a gaze attractor and caused the greatest disruption of the object-oriented priming effect, ostensibly because it does not afford tool-object action due to its non-functional interaction with the operant tool-end that actually engages with the object (e.g. hammer-head to nail). The enhanced attention may therefore serve to encode the intent of the manipulative grasp-posture. The results show how contextual and grasp-specific affordances directly modulate how an observer gathers action-information when evaluating static tool-object scenes.
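The permutation-test logic in this abstract can be illustrated with a minimal sketch. All values below are simulated for illustration, not taken from the study: two hypothetical grasp conditions differ in how much gaze dwells on the object AOI, and condition labels are shuffled to test whether the observed difference exceeds chance.

```python
# Hypothetical sketch of a label-permutation test on AOI dwell proportions.
# The data, group sizes, and effect direction are assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-trial dwell proportions on the object AOI (illustrative):
no_hand = rng.beta(6, 3, size=30)       # no-hand images: object draws most gaze
manipulative = rng.beta(3, 6, size=30)  # manipulative grasp pulls gaze away

def perm_test(a, b, n_perm=5000, rng=rng):
    """Two-sample permutation test on the difference of means."""
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # break the condition labels
        diff = pooled[:a.size].mean() - pooled[a.size:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

obs, p = perm_test(no_hand, manipulative)
print(f"observed difference = {obs:.3f}, p = {p:.4f}")
```

The same machinery applies to scanpath statistics: shuffling fixation order instead of condition labels gives the temporal-robustness check described above.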
Experimental Brain Research, 2013
Mentally representing manipulable objects involves automatic encoding of their corresponding affordances: options for interacting with the object. Two experiments investigated how activation of objects' manual affordances is triggered by visual and linguistic cues and whether graspable object parts play a special role in this process. First, analysis of participants' motor and oculomotor behaviour confirmed that perceptual and linguistic cues potentiate activation of grasp affordances. Second, a differential visual-attention mechanism is proposed for the activation of individual compatibility effects associated with target and distractor objects. Third, we registered an implicit attention-attraction effect from an object's handles, suggesting that graspable parts automatically attract attention during object identification. Fourth, this effect was further amplified by visual but not linguistic cueing manipulations. The latter finding confirms a recent hypothesis about the differential roles of visual and linguistic information about perceived objects and the resulting action-planning processes. Our results inform current theories of vision for action.
Learning about the function and use of tools through observation requires the ability to exploit one's own knowledge derived from past experience. It also depends on the detection of low-level local cues that are rooted in the tool's perceptual properties. Best known as ‘affordances’, these cues generate biomechanical priors that constrain the number of possible motor acts that are likely to be performed on tools. The contribution of these biomechanical priors to the learning of tool-use behaviors is well supported. However, it is not yet clear if, and how, affordances interact with higher-order expectations that are generated from past experience – i.e. probabilistic exposure – to enable observational learning of tool use. To address this question we designed an action observation task in which participants were required to infer, under various conditions of visual uncertainty, the intentions of a demonstrator performing tool-use behaviors. Both the probability of observing the demonstrator achieving a particular tool function and the biomechanical optimality of the observed movement were varied. We demonstrate that biomechanical priors modulate the extent to which participants' predictions are influenced by probabilistically-induced prior expectations. Biomechanical and probabilistic priors have a cumulative effect when they ‘converge’ (in the case of a probabilistic bias assigned to optimal behaviors), or a mutually inhibitory effect when they actively ‘diverge’ (in the case of probabilistic bias assigned to suboptimal behaviors).
PLoS ONE, 2011
Background: Substantial literature has demonstrated that how the hand approaches an object depends on the manipulative action that will follow object contact. Little is known about how the placement of individual fingers on objects is affected by the end-goal of the action.
2012
Prior research has linked visual perception of tools with plausible motor strategies. Thus, observing a tool activates the putative action-stream, including the left posterior parietal cortex, and observing a hand functionally grasping a tool involves the inferior frontal cortex. However, tool-use movements are performed in a context- and grasp-specific manner, rather than in relative isolation.
Attention Perception & Psychophysics, 2010
This study explored whether functional properties of the hand and tools influence the allocation of spatial attention. In four experiments that used a visual-orienting paradigm with predictable lateral cues, hands or tools were placed near potential target locations. Results showed that targets appearing in the hand’s grasping space (i.e., near the palm) and the rake’s raking space (i.e., near the prongs) produced faster responses than did targets appearing to the back of the hand, to the back of the rake, or near the forearm. Validity effects were found regardless of condition in all experiments, but they did not interact with the target-in-grasping/raking-space bias. Thus, the topology of the facilitated space around the hand is, in part, defined by the hand’s grasping function and can be flexibly extended by functional experience using a tool. These findings are consistent with the operation of bimodal neurons, and this embodied component is incorporated into a neurally based model of spatial attention.
Neuropsychologia, 2009
Neuropsychologia, 2014
The term affordance defines a property of objects that relates to the possible interactions an agent can carry out on that object. In monkeys, canonical neurons encode both the visual and the motor properties of objects with high specificity. However, it is not clear whether a similarly fine-grained description of these visuomotor transformations exists in humans. In particular, it has not yet been proven that the processing of visual features related to specific affordances induces both specific and early visuomotor transformations, given that complete specificity has been reported to emerge quite late (300-450 ms). In this study, we applied an adaptation-stimulation paradigm to investigate early cortico-spinal facilitation and hand-movement synergies evoked by the observation of tools. We adapted, through passive observation of finger movements, neuronal populations coding for either precision or power grip actions. We then presented the picture of one tool affording one of the...
Ecological Psychology, 2001
Whether an object can be used to satisfy a given tool user's intention depends on, among other things, the object's inertial properties. Overcoming an object's rotational inertia is key in controlling a handheld object with respect to a given intention. Manipulating an object by means of muscular exertion is the domain of dynamic touch. Thus, the affordances of a given object as a tool should be perceivable by means of dynamic touch. In 3 experiments, we investigated the inertial variables that support perception of 2 potential affordances of handheld tools: hammer-with-ability and poke-with-ability. The results suggest that ratings of hammers are dependent on the volume of the inertial ellipsoid in such a way that supports the transference of power to the struck surface. Ratings of pokers are dependent on the same quantity but in a way that supports controllability of the poking object. Additionally, results suggest that minimal experience in a given tool-using task may "tune" tool users to the inertial properties required of a given tool for a given function.
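The "volume of the inertial ellipsoid" quantity discussed above can be made concrete. A rigid object's inertia ellipsoid has semi-axes 1/√I_k along its principal axes, where I_k are the principal moments of inertia, so its volume is (4/3)π/√(I₁I₂I₃). The sketch below uses this standard mechanics definition with purely illustrative numbers; the tensor values are assumptions, not data from the study.

```python
# Hedged sketch: volume of the inertia ellipsoid of a hand-held object.
# The inertia-tensor values below are hypothetical, for illustration only.
import numpy as np

def inertia_ellipsoid_volume(inertia_tensor):
    """Volume of the ellipsoid with semi-axes 1/sqrt(I_k), where I_k are
    the principal moments (eigenvalues of the symmetric inertia tensor,
    in kg*m^2): V = (4/3) * pi / sqrt(I1 * I2 * I3)."""
    eigvals = np.linalg.eigvalsh(inertia_tensor)  # principal moments
    assert np.all(eigvals > 0), "inertia tensor must be positive definite"
    return (4.0 / 3.0) * np.pi / np.sqrt(np.prod(eigvals))

# Illustrative tensor for a hammer-like object wielded about the grip point:
I = np.diag([0.002, 0.018, 0.019])  # kg*m^2, assumed values
print(f"inertia-ellipsoid volume = {inertia_ellipsoid_volume(I):.1f}")
```

A long-handled hammer concentrates mass far from the grip, shrinking the ellipsoid along some axes and elongating it along others; the abstract's claim is that ratings track this volume differently for hammering versus poking.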
Human Movement Science, 2011
The purpose of this study was to investigate the influence of object function and the observer's action capabilities on grasp facilitation. We used a stimulus-response compatibility (SRC) protocol in which participants were asked to reach for and grasp a drinking glass using one of two grasps: the thumb-up or the thumb-down grasp. Reaction time (RT) was used as the index of grasp facilitation. In Experiment 1, we found evidence for the facilitation of "functionally relevant" grasps, where the type of grasp facilitated depended on the location of the opening but not the shape of the object. However, this effect was found only when attention was directed toward the location of the opening. In Experiments 2 and 3, we found that this facilitation was also affected by whether participants had the ability to functionally interact with the object. These results show that S-R compatibilities are influenced both by the object's function and by the actor's action capabilities, and are interpreted in the framework of affordances.
PLoS ONE, 2013
Understanding the interactions of visual and proprioceptive information in tool use is important, as it is the basis for learning the tool's kinematic transformation and thus for skilled performance. This study investigated how the CNS combines seen cursor positions and felt hand positions under a visuo-motor rotation paradigm. Young and older adult participants performed aiming movements on a digitizer while looking at rotated visual feedback on a monitor. After each movement, they judged either the proprioceptively sensed hand direction or the visually sensed cursor direction. We identified asymmetric mutual biases with a strong visual dominance. Furthermore, we found a number of differences between explicit and implicit judgments of hand direction. The explicit judgments had considerably larger variability than the implicit judgments. The bias toward the cursor direction was about twice as strong for the explicit judgments as for the implicit judgments. The individual biases of explicit and implicit judgments were uncorrelated, and the biases of these judgments exhibited opposite sequential effects. Moreover, age-related changes also differed between the judgment types: with increasing age, judgment variability decreased and the bias toward the cursor direction increased, but only for the explicit judgments. These results indicate distinct explicit and implicit neural representations of hand direction, similar to the notion of distinct visual systems.
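The paradigm and the reported visual dominance can be sketched in a few lines. This is a toy cue-fusion model under assumed parameters (the rotation angle and the 0.8 visual weight are illustrative, not the study's fitted values): the cursor direction shown on the monitor is the hand direction plus the experimental rotation, and the perceived hand direction is a weighted average of the two cues.

```python
# Minimal sketch of a visuo-motor rotation trial with a visually
# dominant cue-fusion model. All parameter values are assumptions.
def rotated_cursor(hand_deg, rotation_deg):
    """Cursor direction displayed on the monitor: the actual hand
    direction rotated by the experimental visuo-motor rotation."""
    return hand_deg + rotation_deg

def fused_estimate(hand_deg, cursor_deg, w_vision=0.8):
    """Perceived hand direction as a weighted average of the
    proprioceptive (hand) and visual (cursor) cues. w_vision > 0.5
    encodes the visual dominance reported in the abstract."""
    return (1.0 - w_vision) * hand_deg + w_vision * cursor_deg

hand = 0.0                                # actual movement direction (deg)
cursor = rotated_cursor(hand, 30.0)       # 30-degree rotated feedback
estimate = fused_estimate(hand, cursor)
print(estimate)                           # → 24.0, pulled toward the cursor
```

On this simple account, a larger w_vision for explicit judgments reproduces their roughly doubled bias toward the cursor relative to implicit judgments.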
Cortex, 2020
The ability to build and expertly manipulate manual tools sets humans apart from other animals. Watching images of manual tools has been shown to elicit a distinct pattern of neural activity in a network of parietal areas, presumably because tools entail a potential for action, a unique feature related to their functional use and not shared with other manipulable objects. However, a question has been raised whether this selectivity reflects processing of low-level visual properties, such as the elongated shape that is idiosyncratic to most tool-objects, rather than action-specific features. To address this question, we created and behaviourally validated a stimulus set that dissociates objects that are manipulable and nonmanipulable, as well as objects with different degrees of body-extension property (tools and non-tools), while controlling for object shape and low-level image properties. We tested the encoding of action-related features by investigating neural representations in two parietal regions of interest (intraparietal sulcus and superior parietal lobule) using functional MRI. Univariate differences between tools and non-tools were not observed when controlling for visual properties, but strong evidence for the action account was nevertheless revealed when using a multivariate approach. Overall, this study provides further evidence that the representational content in the dorsal visual stream reflects encoding of action-specific properties.
Neuroscience, 2018
The ability to recognize a tool's affordances (e.g., how a spoon should be appropriately grasped and used) is vital for daily life. Prior research has identified parietofrontal circuits, including mirror neurons, as critical in understanding affordances. However, parietofrontal action-encoding regions receive extensive visual input and are adjacent to parietofrontal attention-control networks. It is unclear how eye movements and attention modulate parietofrontal encoding of affordances. To address this issue, scenes depicting tools in different use-contexts and grasp-postures were presented to healthy subjects across two experiments, with stimulus durations of 100 ms or 500 ms. The 100-ms experiment automatically restricted saccades and required covert attention, while the 500-ms experiment allowed overt attention. The two experiments elicited similar behavioral decisions on tool-use correctness and isolated the influence of attention on parietofrontal activity. Parietofrontal ERPs (P600) distinguishing tool-use contexts (e.g., spoon-yogurt vs. spoon-ball) were similar in both experiments. Conversely, parietofrontal ERPs distinguishing tool-grasps were characterized by posterior-to-frontal N130-N200 ERPs in the 100-ms experiment and by saccade-perturbed N130-N200 ERPs, a frontal N400, and a parietal P500 in the 500-ms experiment. In particular, only overt gaze toward the hand-tool interaction engaged mirror neurons (frontal N400) when discerning grasps that manipulate but do not functionally use a tool (e.g., grasping the bowl rather than the stem of a spoon). The results detail the first human electrophysiological evidence of how attention selectively modulates multiple parietofrontal grasp-perception circuits, especially the mirror-neuron system, while leaving parietofrontal encoding of tool-use contexts unaffected. These results are pertinent to neurophysiological models of affordances, which typically neglect the role of attention in action perception.
Brain and cognition, 2011
How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This "affordances" hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common objects. We presented objects that have a strong significance for action (pinching and grasping) and objects with no such significance. Two experimental tasks involved participants viewing objects presented on a computer screen. For the first task, they were instructed to respond rapidly to changes in background colour by using an apparatus mimicking precision and power grip responses. For the second task, they received stimulation of their primary motor cortex using transcranial magnetic stimulation (TMS) while passively viewing the objects. Muscular responses (motor evoked potentials: MEPs) were recorded from two intrinsic hand muscles (associated with either a precision or power grip). The data showed an interaction between type of response (or muscle) and type of object, with both reaction time and MEP measures implying the generation of a congruent motor plan in the period immediately after object presentation. The results provide further support for the notion that the physical properties of objects automatically activate specific motor codes, but also demonstrate that this influence is rapid and relatively short lived.
Highlights:
► How do objects automatically activate specific motor plans known as "affordances"?
► Task-irrelevant pictures shown to activate congruent grip actions.
► Affordance effect evident in both RTs and motor evoked potentials.
► Affordance effect arises rapidly and also dissipates quickly.
► Affordance effect evident for separate hand actions generated in the same hemisphere.
Lecture Notes in Computer Science, 2009
Classically, visual attention is assumed to be influenced by visual properties of objects, e.g. as assessed in visual search tasks. However, recent experimental evidence suggests that visual attention is also guided by action-related properties of objects ("affordances"; Gibson, 1966, 1979), e.g. the handle of a cup affords grasping the cup, so attention is drawn towards the handle. In a first step towards modelling this interaction between attention and action, we implemented the Selective Attention for Action model (SAAM). The design of SAAM is based on the Selective Attention for Identification model (SAIM; Heinke & Humphreys, 2003). For instance, we also followed a soft-constraint satisfaction approach in a connectionist framework. However, SAAM's selection process is guided by locations within objects suitable for grasping them, whereas SAIM selects objects based on their visual properties. To implement SAAM's selection mechanism, two sets of constraints were used. The first set took into account the anatomy of the hand, e.g. maximal possible distances between fingers. The second set (geometrical constraints) considered suitable contact points on objects by using simple edge detectors. First, we demonstrate that SAAM can successfully mimic human behaviour by comparing simulated contact points with experimental data. Second, we show that SAAM simulates affordance-guided attentional behaviour, as it successfully generates contact points for only one object in two-object images. Related work has shown that hand postures (precision or power grip) can influence subsequent categorisation of objects. In that study, participants had to categorise objects as either artefact or natural object; additionally, and unknown to the participants, the objects could be manipulated with either a precision or a power grasp. Categorisation was faster when the hand postures were congruent with the grasp than when they were incongruent. Hence, the participants' behaviour was influenced by action-related properties of objects that were irrelevant to the experimental task. This experiment, together with earlier, similar studies, can be interpreted as evidence for automatic detection of affordances.
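The soft-constraint satisfaction idea behind SAAM can be illustrated with a toy example. This is not the published connectionist model: the energy terms, weights, and 1-D "object" below are invented for illustration. Candidate thumb/finger contact points are scored by a geometric term (contacts should fall on strong edges) plus an anatomical term (a preferred thumb-finger separation), and the pair minimizing the combined soft-constraint energy wins.

```python
# Toy soft-constraint-satisfaction sketch in the spirit of SAAM.
# Energy terms, weights, and the edge profile are hypothetical.
import numpy as np

# Edge strength along a 1-D slice of an object (assumed values):
edge = np.array([0.1, 0.9, 0.2, 0.1, 0.1, 0.2, 0.8, 0.1])

def energy(thumb, finger, edge, pref_gap=5, w_geom=1.0, w_anat=0.5):
    """Lower is better. Geometric constraint: both contacts should sit
    on strong edges. Anatomical constraint: thumb-finger separation
    should be near the hand's preferred grasp aperture."""
    geom = -(edge[thumb] + edge[finger])          # reward strong edges
    anat = (abs(finger - thumb) - pref_gap) ** 2  # penalise bad apertures
    return w_geom * geom + w_anat * anat

# Exhaustive search over contact-point pairs (the connectionist model
# instead relaxes toward the energy minimum, but the optimum is the same idea):
pairs = [(t, f) for t in range(len(edge)) for f in range(len(edge)) if f > t]
best = min(pairs, key=lambda p: energy(*p, edge))
print(best)  # → (1, 6): the two strong edges at the preferred separation
```

Because neither constraint is hard, a weak edge at the perfect aperture can lose to a strong edge at a slightly awkward one, which is the sense in which the constraints are "soft".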
Quarterly Journal of Experimental Psychology, 2011
Viewing objects can result in automatic, partial activation of motor plans associated with them—“object affordance”. Here, we recorded grip force simultaneously from both hands in an object affordance task to investigate the effects of conflict between coactivated responses. Participants classified pictures of objects by squeezing force transducers with their left or right hand. Responses were faster on trials where the object afforded an action with the same hand that was required to make the response (congruent trials) compared to the opposite hand (incongruent trials). In addition, conflict between coactivated responses was reduced if it was experienced on the preceding trial, just like Gratton adaptation effects reported in “conflict” tasks (e.g., Eriksen flanker). This finding suggests that object affordance demonstrates conflict effects similar to those shown in other stimulus–response mapping tasks and thus could be integrated into the wider conceptual framework on overlearnt stimulus–response associations. Corrected erroneous responses occurred more frequently when there was conflict between the afforded response and the response required by the task, providing direct evidence that viewing an object activates motor plans appropriate for interacting with that object. Recording continuous grip force, as here, provides a sensitive way to measure coactivated responses in affordance tasks.
Experimental Brain Research, 2019
It has previously been demonstrated that tool recognition is facilitated by the repeated visual presentation of object features affording actions, such as those related to grasping and functional use. It is unclear, however, whether this can also facilitate pantomiming. Participants were presented with an image of a prime followed by a target tool and were required to pantomime the appropriate action for each one. The grasp and functional-use attributes of the target tool were either the same as or different from those of the prime. Contrary to expectations, participants were slower at pantomiming the target tool relative to the prime regardless of whether the grasp and function of the tool were the same or different, except when the prime and target tools consisted of identical images of the same exemplar. We also found a decrease in accuracy of performing functional-use actions for the target tool relative to the prime when the two differed in functional use but not grasp. We reconcile the differences between our findings and those of priming studies on tool recognition by appealing to differences in task demands and known differences in how the brain recognises tools and performs actions to make use of them.
Journal of Experimental Psychology: Human Perception and Performance, 2011
At issue in the present series of experiments was the ability to prospectively perceive the action-relevant properties of hand-held tools by means of dynamic touch. In Experiment 1, participants judged object moveability. In Experiment 2, participants judged how difficult an object would be to hold if held horizontally, and in Experiments 3 and 4, participants rated how fast objects could be rotated. In each experiment, the first and second moments of mass distribution of the objects were systematically varied. Manipulations of wielding speed and orientation during restricted exploration revealed perception to be constrained by (a) the moments of mass distribution of the hand-tool system, (b) the qualities of exploratory wielding movements, and (c) the intention to perceive each specific property. The results are considered in the context of the ecological theory of dynamic touch. Implications for accounts of the informational basis of dynamic touch and for the development of a theory of haptically perceiving the affordance properties of tools are discussed.
palm.mindmodeling.org
Recently, researchers have suggested that when we see an object we automatically represent how that object affords action. However, the precise nature of this representation remains unclear: is it a specific motor plan or a more abstract response code? Furthermore, do action representations actually influence what we perceive? In Experiment 1, participants responded to an image of an object and then made a laterality judgment about an image of a hand. Hand identification was fastest when the hand corresponded to both the orientation and grasp type of the object, suggesting that affordances are represented as specific action plans. In Experiment 2, participants saw an image of a hand before interpreting an ambiguous object drawing. Responses were biased towards the interpretation that was congruent with the grasp type of the hand prime. Together, these results suggest that action representations play a critical role in object perception.