Psychonomic Bulletin & Review
From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other’s mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention in all cases? Beyond cases where visual information is missing, we show how combining vision with other senses can be helpful, and even necessary, for certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointe...
Scientific Reports
The coordination of attention between individuals is a fundamental part of everyday human social interaction. Previous work has focused on the role of gaze information for guiding responses during joint attention episodes. However, in many contexts, hand gestures such as pointing provide another valuable source of information about the locus of attention. The current study developed a novel virtual reality paradigm to investigate the extent to which initiator gaze information is used by responders to guide joint attention responses in the presence of more visually salient and spatially precise pointing gestures. Dyads were instructed to use pointing gestures to complete a cooperative joint attention task in a virtual environment. Eye and hand tracking enabled real-time interaction and provided objective measures of gaze and pointing behaviours. Initiators displayed gaze behaviours that were spatially congruent with the subsequent pointing gestures. Responders overtly attended to the...
Perspectives on Psychological Science
When two people look at the same object in the environment and are aware of each other’s attentional state, they find themselves in a shared-attention episode. This can occur through intentional or incidental signaling and, in either case, causes an exchange of information between the two parties about the environment and each other’s mental states. In this article, we give an overview of what is known about the building blocks of shared attention (gaze perception and joint attention) and focus on bringing to bear new findings on the initiation of shared attention that complement knowledge about gaze following and incorporate new insights from research into the sense of agency. We also present a neurocognitive model, incorporating first-, second-, and third-order social cognitive processes (the shared-attention system, or SAS), building on previous models and approaches. The SAS model aims to encompass perceptual, cognitive, and affective processes that contribute to and follow on f...
Quarterly Journal of Experimental Psychology, 2020
Eye movements provide important signals for joint attention. However, those eye movements that indicate bids for joint attention often occur among non-communicative eye movements. This study investigated the influence of these non-communicative eye movements on subsequent joint attention responsivity. Participants played an interactive game with an avatar which required both players to search for a visual target on a screen. The player who discovered the target used their eyes to initiate joint attention. We compared participants’ saccadic reaction times (SRTs) to the avatar’s joint attention bids when they were preceded by non-communicative eye movements that predicted the location of the target (Predictive Search), did not predict the location of the target (Random Search), and when there were no non-communicative eye gaze movements prior to joint attention (No Search). We also included a control condition in which participants completed the same task, but responded to a dynamic a...
2012
Social gaze provides a window into the interests and intentions of others and allows us to actively point out our own. It enables us to engage in triadic interactions involving human actors and physical objects and to build an indispensable basis for coordinated action and collaborative efforts. The object-related aspect of gaze, in combination with the fact that any motor act of looking encompasses both input and output of the minds involved, makes this non-verbal cue system particularly interesting for research in embodied social cognition. Social gaze comprises several core components, such as gaze-following or gaze aversion. Gaze-following can result in situations of either “joint attention” or “shared attention.” The former describes situations in which the gaze-follower is aware of sharing a joint visual focus with the gazer. The latter refers to a situation in which gazer and gaze-follower focus on the same object and both are aware of their reciprocal awareness of this joint focus. Here, a novel interactive eye-tracking paradigm suited for studying triadic interactions was used to explore two aspects of social gaze. Experiments 1a and 1b assessed how the latency of another person’s gaze reactions (i.e., gaze-following or gaze aversion) affected participants’ sense of agency, which was measured by their experience of relatedness of these reactions. Results demonstrate that both the timing and congruency of a gaze reaction, as well as the other’s action options, influence the sense of agency. Experiment 2 explored differences in gaze dynamics when participants were asked to establish either joint or shared attention. Findings indicate that establishing shared attention takes longer and requires a larger number of gaze shifts than joint attention, which seems to resemble simple visual detection more closely. Taken together, these findings provide novel insights into the sense of agency and the awareness of others in gaze-based interaction.
2008
Research has shown that observers automatically align their attention with another's gaze direction. The present study investigates whether inferring another's attended location affects the observer's attention in the same way as observing their gaze direction. In two experiments, we used a laterally oriented virtual human head to prime one of two laterally presented targets. Experiment 1 showed that, in contrast to the agent with closed eyes, observing the agent with open eyes facilitated the observer's alignment of attention with the primed target location. Experiment 2, where either sunglasses or occluders concealed the agent's eye direction, showed that only the agent with the sunglasses facilitated the observer's alignment of attention with the target location. Taken together, the data demonstrate that head orientation alone is not sufficient to trigger a shift in the observer's attention, that gaze direction is crucial to this process, and that inferring the region to which another person is attending does facilitate the alignment of attention.
Axel Seemann (ed.), Joint attention. Perspectives and Developments, MIT Press, Cambridge MA, January 2012, pp. 205-242.
Journal of Pragmatics, 2007
We demonstrate that "joint attention", usually conceived of in the psychological sciences as indicative of such minded processes as the capacity for understanding the intentional, goal-directed behavior of others, is fundamentally an interactional process, one that cannot be extricated from the ongoing flow of social activity. We examine very young children's actions of showing objects to others, and explicate the practical procedures by which they draw and sustain another's attention to an object, and convey "what for", that is, what another should do in response. At issue is how children in a natural social setting (here, a daycare center) track the activities of others for felicitous moments to present objects, and design and position their actions by reference to the ongoing preoccupations, commitments, and distractions of others. Further, drawing another's attention poses sequential implications for children's actions which structure opportunities for parties (child and other) to display, and modify, their understandings of what sort of social exchange is transpiring between them.
When interacting with others, we often use bodily signals to communicate. Among these signals, pointing, whether with the eyes or the hands, allows coordinating our attention with others, and the perception of pointing gestures implicates a range of social cognitive processes. Here, we review the brain mechanisms underpinning the perception and understanding of pointing, focusing on eye gaze perception and associated joint attention processes. We consider pointing gesture perception, but leave aside pointing gesture execution as it relates to a distinct area of cognitive neuroscience research. We describe the attention orienting effects of pointing and the neural substrates for the perception of biological cues. We consider the multiple high-level social cognitive processes elicited by pointing gesture perception and examine how pointing gestures are related to the general taxonomy of gestures. We conclude by emphasizing that pointing is a social phenomenon and that a full account of pointing will require an integrative approach taking into account the distinct perspectives from which this phenomenon can be investigated.
NeuroImage, 2011
In hub-satellite collaboration using video, interpreting gaze direction is critical for communication between hub coworkers sitting around a table and their remote satellite colleague. However, 2D video distorts images and makes this interpretation inaccurate. We present GazeLens, a video conferencing system that improves hub coworkers' ability to interpret the satellite worker's gaze. A 360° camera captures the hub coworkers and a ceiling camera captures artifacts on the hub table. The system combines these two video feeds in an interface whose lens widgets strategically guide the satellite worker's attention toward specific areas of her/his screen, allowing hub coworkers to clearly interpret her/his gaze direction. Our evaluation shows that GazeLens (1) increases hub coworkers' overall gaze interpretation accuracy by 25.8% compared to a conventional video conferencing system, particularly for physical artifacts on the hub table, and (2) improves hub coworkers' ability to distinguish between gazes toward people and artifacts. We discuss how screen space can be leveraged to improve gaze interpretation.
Frontiers in psychology, 2018
Humans substantially rely on non-verbal cues in their communication and interaction with others. The eyes represent a "simultaneous input-output device": While we observe others and obtain information about their mental states (including feelings, thoughts, and intentions-to-act), our gaze simultaneously provides information about our own attention and inner experiences. This substantiates its pivotal role for the coordination of communication. The communicative and coordinative capacities - and their phylogenetic and ontogenetic impacts - become fully apparent in triadic interactions constituted in its simplest form by two persons and an object. Technological advances have sparked renewed interest in social gaze and provide new methodological approaches. Here we introduce the 'Social Gaze Space' as a new conceptual framework for the systematic study of gaze behavior during social information processing. It covers all possible categorical states, namely 'partne...
Psychological Science, 2007
When two people discuss something they can see in front of them, what is the relationship between their eye movements? We recorded the gaze of pairs of subjects engaged in live, spontaneous dialogue. Cross-recurrence analysis revealed a coupling between the eye movements of the two conversants. In the first study, we found their eye movements were coupled across several seconds. In the second, we found that this coupling increased if they both heard the same background information prior to their conversation. These results provide a direct quantification of joint attention during unscripted conversation and show that it is influenced by knowledge in the common ground.
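The core of the cross-recurrence technique used in this study is simple. Below is a minimal sketch in Python, assuming each participant's gaze has already been discretized into one fixation target per sample; the function, variable names, and toy data are illustrative, not the study's actual pipeline.

```python
from typing import Sequence

def cross_recurrence(gaze_a: Sequence[str], gaze_b: Sequence[str], lag: int) -> float:
    """Fraction of samples where B's gaze target at time t + lag matches A's at t."""
    n = len(gaze_a)
    if lag >= 0:
        pairs = list(zip(gaze_a[: n - lag], gaze_b[lag:]))
    else:
        pairs = list(zip(gaze_a[-lag:], gaze_b[: n + lag]))
    if not pairs:
        return 0.0
    return sum(a == b for a, b in pairs) / len(pairs)

# Toy data: the listener tends to revisit the speaker's target about one sample later.
speaker  = ["cup", "cup", "map", "map", "map", "cup", "cup", "map"]
listener = ["map", "cup", "cup", "map", "map", "map", "cup", "cup"]

# Scanning a window of lags shows where gaze coupling peaks (here, at lag +1).
for lag in range(-2, 3):
    print(f"lag {lag:+d}: recurrence = {cross_recurrence(speaker, listener, lag):.2f}")
```

Plotting recurrence as a function of lag is what reveals coupling "across several seconds": a peak at a positive lag indicates the listener's gaze following the speaker's with that delay.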
Social Neuroscience, 2008
Psychonomic Bulletin & Review, 2010
This often requires spatial referencing, the communication and confirmation of an object's location, and this spatial referencing component of a joint attention task is often time critical. Seconds matter when one person from a search-and-rescue team needs corroboration from another or when two agents monitoring a dynamic environment need to reach consensus on a potential threat. As with communication more generally, coordinating a joint activity such as spatial referencing can be analyzed using grounding theory (Brennan,
2016
Social stimuli are a highly salient source of information, and seem to possess unique qualities that set them apart from other well-known categories. One characteristic is their ability to elicit spatial orienting, whereby directional stimuli like eye gaze and pointing gestures act as exogenous cues that trigger automatic shifts of attention that are difficult to inhibit. This effect has been extended to non-social stimuli, like arrows, leading to some uncertainty regarding whether spatial orienting is specialized for social cues. Using a standard spatial cueing paradigm, we found evidence that both a pointing hand and an arrow are effective cues, but that the hand is encoded more quickly, leading to overall faster responses. We then extended the paradigm to include multiple cues in order to evaluate congruent vs. incongruent cues. Our results indicate that faster encoding of the social cue leads to downstream effects on the allocation of attention, resulting in faster orienting.
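To make the dependent measure concrete: in a spatial cueing paradigm like this one, the cueing (congruency) effect is typically scored as the response-time difference between incongruent and congruent trials, computed per cue type. A minimal sketch, with hypothetical trial records and field names rather than the study's data:

```python
from statistics import mean

# Hypothetical trial records: cue type, whether cue direction and target
# location agreed, and the response time in milliseconds.
trials = [
    {"cue": "hand",  "congruent": True,  "rt_ms": 312},
    {"cue": "hand",  "congruent": False, "rt_ms": 348},
    {"cue": "arrow", "congruent": True,  "rt_ms": 335},
    {"cue": "arrow", "congruent": False, "rt_ms": 361},
    # ... many more trials per participant ...
]

def mean_rt(cue: str, congruent: bool) -> float:
    """Mean response time (ms) for one cue type at one congruency level."""
    return mean(t["rt_ms"] for t in trials
                if t["cue"] == cue and t["congruent"] == congruent)

# A larger positive effect means the cue pulled attention more strongly.
for cue in ("hand", "arrow"):
    effect = mean_rt(cue, congruent=False) - mean_rt(cue, congruent=True)
    print(f"{cue}: congruency effect = {effect:.0f} ms (incongruent minus congruent)")
```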
Frontiers in Psychology, 2015
When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process in which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique—epistemic network analysis—to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
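The edge-weight idea at the heart of this network analysis can be sketched in a few lines. The following is an illustrative reduction under stated assumptions (one gaze target per participant per time slice; target labels and data are invented), not the authors' epistemic network analysis implementation:

```python
from collections import Counter

# Hypothetical gaze streams: one fixation target per time slice per participant.
gaze_p1 = ["bread", "bread", "knife", "partner", "bread", "knife"]
gaze_p2 = ["bread", "knife", "knife", "partner", "partner", "knife"]

# Each simultaneous pair (P1 target, P2 target) is a potential network edge;
# its weight is the proportion of time slices in which that pair co-occurred.
pair_counts = Counter(zip(gaze_p1, gaze_p2))
n_slices = len(gaze_p1)

for (t1, t2), count in pair_counts.most_common():
    print(f"P1:{t1:8s} -- P2:{t2:8s}  weight = {count / n_slices:.2f}")
```

Recomputing these weights separately for each task phase, as the study does, shows how the network of shared gaze evolves over the course of the interaction.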
In Zdravko Radman (ed.), The Hand. An Organon of the Mind, MIT Press, Cambridge MA, May 2013, pp. 303-326.
Previous research has identified a number of coordination processes that enable people to perform joint actions. But what determines which coordination processes joint action partners rely on in a given situation? The present study tested whether varying the shared visual information available to co-actors can trigger a shift in coordination processes. Pairs of participants performed a movement task that required them to synchronously arrive at a target from separate starting locations. When participants in a pair received only auditory feedback about the time their partner reached the target, they held their movement duration constant to facilitate coordination. When they received additional visual information about each other's movements, they switched to a fundamentally different coordination process, exaggerating the curvature of their movements to communicate their arrival time. These findings indicate that the availability of shared perceptual information is a major factor in determining how individuals coordinate their actions to obtain joint outcomes.
We call "instrumental" those gestures that can enhance certain thinking processes of an agent by offering him representational models of his actions in a virtual space of imaginary performative possibilities. We argue that pointing is an instrumental gesture in that it represents geometrical information about one's own gaze direction (i.e., a spatial model for attentional/ocular fixation/orientation), and provides a ritualized template for initiating gaze coordination and joint attention. We counter two possible objections, asserting respectively that the representational content of pointing is not constitutive but derived from language, and that pointing directly solicits gaze coordination without representing it. We consider two studies suggesting that attention and spatial perception are actively modified by one's own pointing activity: the first study shows that pointing gestures help children link sets of objects to their corresponding number words; the second, that adults are faster and more accurate in counting when they point.
2012
Joint attention is located at the intersection of a complex set of capacities that serve our cognitive, emotional and action‐oriented relations with others. In one regard, it involves social cognition, our ability to understand others, what they intend, and what their actions mean. Here there is a two‐way relationship between joint attention and social cognition.