Psychological Research, 2013
We investigated eye movements during long-term pictorial recall. Participants performed a perceptual encoding task in which they memorized 16 stimuli that were displayed in different areas of a computer screen. After the encoding phase, the participants had to recall and visualize the images and answer specific questions about visual details of the stimuli. One week later the participants repeated the pictorial recall task. Notably, participants looked longer at the areas where the stimuli had been encoded, not only in the immediate recall task but also one week later. The major contribution of this study is to show that memory for pictorial objects, including their spatial location, is stable and robust over time.
Perception & Psychophysics, 2005
Because visual perception has temporal extent, temporally discontinuous input must be linked in memory. Recent research has suggested that this may be accomplished by integrating the active contents of visual short-term memory (VSTM) with subsequently perceived information. In the present experiments, we explored the relationship between VSTM consolidation and maintenance and eye movements, in order to discover how attention selects the information that is to be integrated. Specifically, we addressed whether stimuli needed to be overtly attended in order to be included in the memory representation or whether covert attention was sufficient. Results demonstrated that in static displays in which the to-be-integrated information was presented in the same spatial location, VSTM consolidation proceeded independently of the eyes, since subjects made few eye movements. In dynamic displays, however, in which the to-be-integrated information was presented in different spatial locations, eye movements were directly related to task performance. We conclude that these differences are related to different encoding strategies. In the static display case, VSTM was maintained in the same spatial location as that in which it was generated. This could apparently be accomplished with covert deployments of attention. In the dynamic case, however, VSTM was generated in a location that did not overlap with one of the to-be-integrated percepts. In order to "move" the memory trace, overt shifts of attention were required.
When attempting to recall previously seen visual information, people often move their eyes to the same locations where they initially viewed it. These eye movements are thought to serve a role in enhancing memory retrieval, although the exact mechanism underlying this effect is still unknown. To investigate this link between eye movements and memory, we conducted an experiment with 80 adult participants. Participants were asked to perform a memory retrieval task while viewing either the same visual context as during encoding or an altered one. Results showed that the benefit of eye movements to memory retrieval depended on the visual input. This suggests that the contribution of eye movements to memory may come not from the motor behavior itself, but from its visual consequences. Our findings thus challenge the hypothesis that eye movements act as a motor retrieval cue and support the view that their visual consequences act as a sensory one. Statement of Relevance: An intriguing quest...
Vision Research, 2001
An unresolved question is how much information can be remembered from visual scenes when they are inspected by saccadic eye movements. Subjects used saccadic eye movements to scan a computer-generated scene and afterwards recalled as many objects as they could. Scene memory was quite good: it improved with display duration, it persisted long after the display was removed, and it continued to accumulate with additional viewings of the same display (Melcher, D. The persistence of memory for scenes. Nature 412, 401). The occurrence of saccadic eye movements was important for ensuring good recall performance, even though subjects often recalled non-fixated objects. Inter-saccadic intervals increased with display duration, showing an influence of duration on global scanning strategy. The choice of saccadic target was predicted by a Random Selection with Distance Weighting (RSDW) model, in which the target for each saccade is selected at random from all available objects, weighted according to distance from fixation, regardless of which objects had previously been fixated. The results show that the visual memory reflected in the recall reports was not utilized for the immediate decision about where to look in the scene. Visual memory can be excellent, but it is not always reflected in oculomotor measures, perhaps because the cost of rapid on-line memory retrieval is too great.
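To make the selection rule concrete, here is a minimal sketch of RSDW-style saccade-target selection as described above. Only the general idea (random choice weighted by distance from fixation, with no exclusion of previously fixated objects) comes from the abstract; the Gaussian weighting function, its width parameter, and all names in the code are illustrative assumptions.

```python
import numpy as np

def rsdw_next_target(object_positions, fixation, sigma=5.0, rng=None):
    """Random Selection with Distance Weighting (RSDW) sketch: the next saccade
    target is drawn at random from all objects, weighted by distance from the
    current fixation. The Gaussian falloff and sigma are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    positions = np.asarray(object_positions, dtype=float)
    distances = np.linalg.norm(positions - np.asarray(fixation, dtype=float), axis=1)
    weights = np.exp(-(distances ** 2) / (2 * sigma ** 2))  # nearer objects weighted more
    return rng.choice(len(positions), p=weights / weights.sum())

# Example: a short simulated scanpath over 8 objects in a 20 x 20 deg display.
# Previously fixated objects are never excluded, so refixations can occur.
rng = np.random.default_rng(0)
objects = rng.uniform(0, 20, size=(8, 2))
fixation, scanpath = objects[0], [0]
for _ in range(10):
    nxt = int(rsdw_next_target(objects, fixation, rng=rng))
    scanpath.append(nxt)
    fixation = objects[nxt]
print(scanpath)
```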
Perception, 2012
Recent behavioural and biological evidence indicates common mechanisms serving working memory and attention (eg Awh et al, 2006 Neuroscience 139 201-208). This study explored the role of spatial attention and visual search in an adapted Corsi spatial memory task. Eye movements and touch responses were recorded from participants who recalled locations (signalled by a colour or shape change) from an array presented either simultaneously or sequentially. The delay between target presentation and recall (0, 5, or 10 s) and the number of locations to be remembered (2-5) were also manipulated. Analysis of the response phase revealed that subjects were less accurate (touch data) and fixated longer (eye data) when responding to sequentially presented targets, suggesting higher cognitive effort. Fixation duration on targets at recall was also influenced by whether spatial location was initially signalled by a colour or shape change. Finally, we found that the sequential tasks encouraged longer fixations on the signalled targets during encoding than simultaneous viewing did, but no difference was observed during recall. We conclude that the attentional manipulations (colour/shape) mainly affected eye-movement parameters, whereas the memory manipulations (sequential versus simultaneous presentation, number of items) mainly affected the performance of the hand during recall; the latter is therefore more important for ascertaining whether an item is remembered or forgotten. In summary, the nature of the stimuli used and how they are presented play key roles in determining subject performance and behaviour during spatial memory tasks.
Memory & Cognition, 1974
Eye fixations were recorded during viewing of picture-label stimuli presented under either recall or recognition instructions; both retention tests were administered. Ss performed substantially better on the retention test of which they had been informed, indicating differential encoding of the same stimuli in anticipation of test type. There was no correlation between recognition and recall of items, evidence that different information from the encoded stimuli was utilized in performing each test. Encoding strategies had no effect on how Ss viewed the stimuli, but viewing patterns were related to memory performance: more word fixations were associated with better verbal recall, while fewer picture fixations were associated with better recall and with better picture recognition.
PloS one, 2012
This study investigated whether "intentional" instructions could improve older adults' object memory and object-location memory about a scene by promoting object-oriented viewing. Eye movements of younger and older adults were recorded while they viewed a photograph depicting 12 household objects in a cubicle with or without the knowledge that memory about these objects and their locations would be tested (intentional vs. incidental encoding). After viewing, participants completed recognition and relocation tasks. Both instructions and age affected viewing behaviors and memory. Relative to incidental instructions, intentional instructions resulted in more accurate memory about object identity and object-location binding, but did not affect memory accuracy about overall positional configuration. More importantly, older adults exhibited more object-oriented viewing in the intentional than incidental condition, supporting the environmental support hypothesis.
Attention, Perception, & Psychophysics, 2005
A gaze-contingent short-term memory paradigm was used to obtain forgetting functions for realistic objects in scenes. In Experiment 1, observers freely viewed nine-item scenes. After observers' gaze left a predetermined target, they could fixate from one to seven intervening nontargets before the scene was replaced by a spatial probe at the target location. The task was then to select the target from four alternatives. A steep recency benefit was found over the range of one to two intervening objects, declining to an above-chance prerecency asymptote over the remainder of the forgetting function. In Experiment 2, we used sequential presentation and variable delays to explore the contributions of decay and extrafoveal processes to these behaviors. We conclude that memory for objects in scenes, when serialized by fixation sequence, shows recency and prerecency effects similar to those for isolated objects presented sequentially over time. We discuss these patterns in the context of the serial-order memory literature and object file theory.
Experimental brain research, 2017
Visuospatial working memory (VSWM) is a set of cognitive processes used to encode, maintain and manipulate spatial information. One important feature of VSWM is that it has a limited capacity, such that only a few items can be actively stored and manipulated simultaneously. Given this limited capacity, it is important to determine the conditions that affect memory performance, as this will improve our understanding of the architecture and function of VSWM. Previous studies have shown that VSWM is disrupted when task-irrelevant eye movements are performed during the maintenance phase; however, relatively few studies have examined the role of eye movements performed during the encoding phase. On one hand, performing eye movements during the encoding phase could result in a stronger memory trace because memory formation is reinforced by the activation of the motor system. On the other hand, performing eye movements to each target could disrupt the configural processing of the spatial array...
Cognitive Computation, 2010
We construct a mathematical model of the information available to the observer regarding each item in a spatial unit of the visual scene at any given moment, dependent on previous fixations and the eye-movement scanpath. Dividing the items in the visual scene into discrete units, two processes affect the amount of information available about each unit at each time: an incremental component and a memory decay component. We assume that extracted information is incremented for a scene unit each time the fixation falls on it, increasing as a sigmoid or exponential growth function with subsequent fixations, and that information decays exponentially between recurring fixations on a unit. Results show that the amount of available information grows and fluctuates stochastically, varying from unit to unit and from time to time, so that complete information is never reached for the entire scene. Simulations show that the larger the scene, the less information is available, on average, for each scene unit, though more may be available in total, as well as more for favored units. The resulting dynamics of this local available-information measure might predict the probability of perceiving a change in stimulation at a corresponding visual unit or its complement, the probability of change blindness.
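The abstract describes the model only qualitatively, so the following is a minimal simulation sketch under stated assumptions: an exponential-growth increment toward a ceiling of 1.0 on each fixation and exponential decay between fixations. The parameter values, function names, and time step are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_unit_information(fixation_sequence, n_units,
                              growth_rate=0.6, decay_rate=0.05, dt=1.0):
    """Toy version of the model sketched above: information about each scene
    unit decays exponentially over time and is incremented toward a ceiling
    of 1.0 whenever that unit is fixated. All parameters are assumptions."""
    info = np.zeros(n_units)
    history = []
    for unit in fixation_sequence:
        info *= np.exp(-decay_rate * dt)                 # decay between fixations
        info[unit] += growth_rate * (1.0 - info[unit])   # increment on fixation
        history.append(info.copy())
    return np.array(history)

# Example: 6 scene units, a random 40-fixation scanpath; available information
# fluctuates stochastically and never reaches 1.0 for every unit at once.
rng = np.random.default_rng(1)
trace = simulate_unit_information(rng.integers(0, 6, size=40), n_units=6)
print(trace[-1].round(2))
```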
Eye movements during mental imagery are not epiphenomenal but assist the process of image generation. Commands to the eyes for each fixation are stored along with the visual representation and are used as a spatial index, in a motor-based coordinate system, for the proper arrangement of the parts of an image. In two experiments, subjects viewed an irregular checkerboard or color pictures of fish and were subsequently asked to form mental images of these stimuli while keeping their eyes open. During the perceptual phase, one group of subjects was requested to maintain fixation on the screen's center, whereas another group was free to inspect the stimuli. During the imagery phase, all of these subjects were free to move their eyes. A third group of subjects (in Experiment 2) was free to explore the pattern but was requested to maintain central fixation during imagery. For subjects free to explore the pattern, the percentage of time spent fixating a specific location during perception was highly correlated with the time spent on the same (empty) location during imagery. The order in which these locations were scanned during imagery was correlated with the original order during perception. The strength of relatedness of these scanpaths and the vividness of each image predicted performance accuracy. Subjects who fixed their gaze centrally during perception did the same spontaneously during imagery. Subjects free to explore during perception, but required to maintain central fixation during imagery, showed decreased ability to recall the pattern. We conclude that eye scanpaths during visual imagery reenact those of perception of the same visual scene and that they play a functional role. © 2002 Cognitive Science Society, Inc. All rights reserved.
2012
By using eye movement monitoring (EMM) techniques, investigators have been able to examine the processes that support relational memory as they occur online. However, EMM studies have focused only on memory for spatial relations, leaving a lack of EMM evidence for temporal relations. Thus, in the present study, participants performed a recognition memory task with stimuli that varied in their spatial and temporal relations. They were presented with a sequence of objects in a unique spatial configuration and were instructed to detect changes in either the spatial or the temporal relations between study and test presentations. The results provide novel EMM evidence for an interaction between spatial and temporal memory, and for the obligatory effects of relational memory processes on eye movement behaviours. Moreover, the current study was also able to test predictions from the temporal context model (Howard & Kahana, 2002) and found evidence for a temporal contiguity effect.
Consciousness and cognition, 2015
This study examined whether eye movements can be used to measure memory of past events and their relationship with explicit measures. In Experiment 1, after studying a list of Chinese characters, the participants received a recognition memory test. On each trial the participants had to indicate, among one studied character and two nonstudied homonyms, which character they had studied. Participants' eye movements were monitored while they viewed the three-character test display. Both the time-course and response-locked measures showed that participants viewed the studied character longer than the nonstudied characters regardless of their explicit response. Experiment 2 used a wagering task to assess participants' conscious awareness and found that wagering points predicted viewing time for the target better than recognition accuracy did. These findings suggest that the effect of memory on viewing time occurs automatically and is weakly associated with subsequent cons...
Psychonomic Bulletin & Review, 2021
Similarity-based semantic interference (SI) hinders memory recognition. Within long-term visual memory paradigms, the more scenes (or objects) from the same semantic category are viewed, the harder it is to recognize each individual instance. A growing body of evidence shows that overt attention is intimately linked to memory. However, it is not yet understood whether SI mediates overt attention during scene encoding and thereby explains its detrimental impact on recognition memory. In the current experiment, participants viewed 372 photographs belonging to different semantic categories (e.g., a kitchen), presented with different frequencies (4, 20, 40, or 60 images), while being eye-tracked. After 10 minutes, they were presented with the same 372 photographs plus 372 new photographs and asked whether or not they recognized each photo (i.e., an old/new paradigm). We found that the greater the SI, the poorer the recognition performance, especially for old scenes for which memory representations existed. Scenes that were more widely explored were better recognized, but with increasing SI participants focused on more local regions of the scene in search of potentially distinctive details. Attending to the centre of the display, or to scene regions rich in low-level saliency, was detrimental to recognition accuracy, and as SI increased participants were more likely to rely on visual saliency. The complexity of maintaining faithful memory representations under increasing SI also manifested in longer fixation durations; in fact, more successful encoding was also associated with shorter fixations. Our study highlights the interdependence between attention and memory during high-level processing of semantic information.
Psychological Science, 2009
The amount of time viewers could process a scene during eye fixations was varied by a mask that appeared at a certain point in each eye fixation. The scene did not reappear until the viewer made an eye movement. The main finding in these studies was that, in order to process a scene normally, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need to view the words in the text for only 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways. The neural mechanisms that underlie oculomotor activity do not vary as a function of the task viewers engage in; there is not one oculomotor system for looking at scenes, another for visual search, and another for reading. Eye movements are essential in these tasks because the eyes must be placed on the part of the scene or text viewers want to process in detail in foveal vision (Henderson, 2003; Rayner, 1998, in press). Does the oculomotor system react in the same way to stimuli in these different tasks? In the present studies, we utilized a gaze-contingent display change paradigm (Henderson &
During memory retrieval, people tend to reenact the same eye movements performed when memorized items were first displayed and to gaze at similar locations. This has been hypothesized to reflect the role of eye movements as retrieval cues. However, it is unknown what it is about eye movements that makes them effective retrieval cues. Here, we examine, for the first time, the individual and combined contributions of the visual and motor components of eye movements to memory retrieval. Results (N=70) revealed a non-additive benefit of both components of eye movements to memory performance. Additionally, we found that individuals who gained from one component were more likely to gain from the other as well. Together, these findings reveal the central role of eye movements in episodic memory; they show how the visual and motor components are integrated into a single effective memory retrieval cue and how this integration varies among individuals.
bioRxiv (Cold Spring Harbor Laboratory), 2023
When people try to remember visual information, they often move their eyes similarly to how they moved them during encoding. The mechanism underlying this behavior is not yet fully understood. Specifically, it is unclear whether the purpose of this behavior is to recreate the visual input produced during encoding, or the motor and spatial elements of encoding. In this experiment, participants (N=40) encoded pairs of greyscale objects overlaying colored squares. During test, participants were asked about the objects' orientation while presented with squares of the same colors, either at the same locations (control trials) or with their locations switched (test trials) relative to encoding. Results show that during test trials, participants tended to gaze at the square appearing at the location where the remembered object had previously been presented, rather than at the square of the same color. This indicates a superiority of the motor and spatial elements of eye movements over near-peripheral visual cues. Introduction: Our eyes not only allow us to see, but are also involved in various other cognitive tasks. Research has found that people's eyes often move around when they are attempting to solve a mathematical problem, think creatively, make decisions, and remember previously learned information (
The British journal of developmental psychology, 2011
We investigated eye movements during preschool children's pictorial recall of seen objects. Thirteen 3- to 4-year-old children completed a perceptual encoding task and a pictorial recall task. First, they were exposed to 16 pictorial objects, each positioned in one of four distinct areas on the computer screen. Subsequently, they had to recall these pictorial objects from memory in order to respond to specific questions about visual details. We found that children spent more time fixating the areas in which the pictorial objects had previously been displayed. We conclude that, as early as 3-4 years of age, children show specific eye movements when they recall pictorial contents of previously seen objects.
Cognition, 2014
Gaze was monitored with an infrared remote eye-tracker during perception and imagery of geometric forms and figures of animals. Based on the idea that gaze prioritizes locations where features with high information content are visible, we hypothesized that eye fixations should focus on regions that contain one or more local features relevant for object recognition. Most importantly, we predicted that when observers looked at an empty screen and at the same time generated a detailed visual image of what they had previously seen, their gaze would probabilistically dwell within regions corresponding to the original positions of salient features or parts. Correlation analyses showed positive relations between the time gaze dwelled within locations visited during perception and the time it dwelled within the same locations during the imagery generation task. Moreover, the more faithful an observer's gaze enactment, the more accurate was the observer's memory, in a separate test, of the dimension or size in which the forms had been perceived. In another experiment, observers saw a series of pictures of animals and were requested to memorize them. They were later asked, in a recall phase, to answer a question about a property of one of the encoded forms; it was found that, when retrieving a previously seen picture from long-term memory, gaze returned to the location of the part probed by the question. In another experimental condition, the observers were asked to maintain fixation away from the original location of the shape while thinking about the answer, so as to interfere with the gaze enactment process; this manipulation resulted in measurable costs in the quality of memory. We conclude that the generation of mental images relies upon a process of gaze enactment that can be beneficial to visual memory.
Attention, Perception & Psychophysics, 2019
We investigated visual working memory encoding across saccadic eye movements, focusing our analysis on refixation behavior. Over 10-s periods, participants performed a visual search for three, four, or five targets and remembered their orientations for a subsequent change-detection task. In 50% of the trials, one of the targets had its orientation changed. From the visual search period, we scored three types of refixations and applied measures for quantifying eye-fixation recurrence patterns. Repeated fixations on the same regions, as well as repeated fixation patterns, increased with memory load. Correct change detection was associated with more refixations on targets and fewer on distractors, with increased frequency of recurrence, and with longer intervals between refixations. The results are in accordance with the view that patterns of eye movements are an integral part of visual working memory representation.
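The abstract does not specify which recurrence measures were applied, so the sketch below shows one generic way to quantify fixation recurrence: a recurrence matrix marking pairs of fixations that land within a small radius of each other, and the proportion of recurring pairs. The radius and the measure itself are assumptions for illustration, not the paper's method.

```python
import numpy as np

def recurrence_matrix(fixations, radius=1.0):
    """Entry (i, j) is 1 if fixations i and j fall within `radius` of each
    other, i.e., roughly the same region was refixated. Radius is assumed."""
    fix = np.asarray(fixations, dtype=float)
    dist = np.linalg.norm(fix[:, None, :] - fix[None, :, :], axis=-1)
    return (dist <= radius).astype(int)

def recurrence_rate(rec):
    """Proportion of distinct fixation pairs (off-diagonal) that recur."""
    n = rec.shape[0]
    return np.triu(rec, k=1).sum() / (n * (n - 1) / 2)

# Example: a short scanpath that revisits its first two locations.
fixations = [(0, 0), (5, 4), (9, 2), (0.4, 0.3), (5.2, 3.8)]
print(recurrence_rate(recurrence_matrix(fixations, radius=1.0)))
```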
Journal of Vision, 2011
The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here, we examine the temporal evolution of memory resolution, based on observers' ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cuing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event but was maintained if it indicated an object of particular relevance to the task. These cuing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information.
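The two constraints are stated only verbally in the abstract; one minimal way to formalize them (an assumed form for illustration, not the authors' fitted model) is a precision J that grows with exposure time t at an encoding rate r shared across the N display items and saturates at a storage-limited ceiling J_max:

\[ J(t) = \min\!\left(\frac{r\,t}{N},\; J_{\max}\right), \qquad \sigma(t) = \frac{1}{\sqrt{J(t)}}, \]

where σ(t) is the variability of the reproduced orientation; under this assumed form, recall variability decreases with display duration until the storage limit is reached, consistent with the pattern described above.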