
Details of Grant 

EPSRC Reference: EP/E054323/1
Title: Integrating 'when' and 'where' in models of saccade target selection
Principal Investigator: Ludwig, Dr C
Other Investigators:
Researcher Co-Investigators:
Project Partners:
NASA
University of California Santa Barbara
Department: Experimental Psychology
Organisation: University of Bristol
Scheme: Advanced Fellowship
Starts: 01 March 2008 Ends: 28 February 2013 Value (£): 616,499
EPSRC Research Topic Classifications:
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date    Panel Name                           Outcome
24 Apr 2007   ICT Fellowships 2007 - Interviews    Final Decision Yet To Be Made
29 Mar 2007   ICT Fellowships Sift Panel           Invited For Interview
Summary on Grant Application Form
The human visual system is limited in its ability to resolve fine detail. In fact, only in the one or two degrees of central vision (an area approximately the width of two thumbs at arm's length) are we able to see with high acuity. Therefore, in order to explore the visual environment, humans move their eyes very frequently. These eye movements are called saccades, and we make ~10,000 of them every hour of our waking lives. These eye movements are critical for our vision, and therefore for successful interaction with the world around us.

Eye movement researchers are interested in the properties of saccades and the visual signals that are effective in triggering these movements. The critical question in this regard is: how do people decide where to look next, and when to move the current point of fixation to the selected location?

Theories of saccadic eye movement generation typically focus on one or the other of these two aspects. That is, sophisticated theories have been developed to explain which spatial locations in a visual scene are fixated. These theories do not explain the order in which the various locations are visited, nor do they account for how long the eyes remain stationary before moving on to the next location. Likewise, theories exist that account for the latency, or reaction time, of saccades in response to the appearance of a single visual target on a blank background. Such theories do not account for which locations are selected for detailed visual scrutiny. Current theories of saccade target selection are therefore incomplete, in that they consider only the spatial or only the temporal aspects.

The aim of this project is to develop a general theory of saccade target selection that incorporates both the spatial and temporal components. Specifically, the theory will be formulated in such a way that it can be implemented, or simulated, in a computer program.
Detailed simulations of this kind are often helpful in that they enable one to make more specific, and sometimes counter-intuitive, predictions that can then be tested in experiments. Testing the theory in this way is an important and substantial part of the project. The experiments proposed generally involve having human observers perform a discrimination task on a visual image. For instance, we can present a number of patterns that contain oriented lines, and ask observers to find the one pattern that consists of vertically oriented lines. Using an eye tracker, we can then monitor which patterns are fixated, in what order, and how long it takes observers to look from one place to the next. These eye movement data are used to confirm or reject the theory's predictions.

A unified theory of saccade target selection is intrinsically interesting to scientists who investigate the human eye movement and/or visual systems. In addition, its computational implementation may be relevant in more applied settings. For instance, a major problem in robotics is one of representation: a mobile robot simply cannot store all the information it might want to store about its environment (this amount would rapidly grow out of control). Instead, a solution might be to sample information from the environment if and when it is needed for some task, similar to the way humans sample the visual world using eye movements. Knowledge and theories about how humans do this can be extremely useful in an engineering context.

In addition, the empirical findings generated in the project are not only of theoretical relevance. Knowledge of the kinds of representations human observers can use to drive their eye movements can be important in a variety of settings in which people are required to respond quickly to visual signals. Here one can think of a pilot flying an aircraft, or the staff in an air traffic control room.
In these settings it may be very important to tailor the layout of the visual scene to the human who will be operating in that environment.
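To illustrate the kind of computational implementation the summary describes, the sketch below simulates a generic rise-to-threshold race model, a standard device in the saccade literature for jointly producing a "where" and a "when". This is a hypothetical illustration, not the model developed in this project: each candidate location gets a noisy evidence accumulator, and the first accumulator to reach threshold determines both the chosen location and the saccade latency. The salience values, threshold, and noise level are all assumptions chosen for the example.

```python
import random

def simulate_saccade(salience, threshold=1.0, noise=0.05, dt=0.001, seed=None):
    """Race model: one noisy accumulator per candidate location.

    Each accumulator rises at its mean rate (`salience`) plus Gaussian
    noise. The first to reach `threshold` decides *where* the saccade
    goes (its index) and *when* it is launched (the elapsed time).
    All parameter values here are illustrative assumptions.
    """
    rng = random.Random(seed)
    levels = [0.0] * len(salience)
    t = 0.0
    while True:
        t += dt
        for i, rate in enumerate(salience):
            # Drift plus noise scaled by sqrt(dt), as in a diffusion process.
            levels[i] += rate * dt + rng.gauss(0.0, noise) * dt ** 0.5
            if levels[i] >= threshold:
                return i, t  # (where, when)

# One trial with three candidate locations; the third is most salient,
# so it usually wins the race, and with a shorter latency.
where, when = simulate_saccade([2.0, 3.0, 8.0], seed=1)
```

In such a model, spatial selection and latency are not separate mechanisms: making one location more salient both biases *where* the winner lands and shortens *when* the movement is triggered, which is exactly the coupling between the two aspects that the project aims to capture.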
Key Findings; Potential use in non-academic contexts; Description; Sectors submitted by the Researcher:
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL: https://sites.google.com/site/visionandmovement/
Further Information:  
Organisation Website: http://www.bris.ac.uk