EPSRC Reference: |
EP/E037372/1 |
Title: |
Natural dynamic scenes and human vision |
Principal Investigator: |
Troscianko, Professor T |
Other Investigators: |
|
Researcher Co-Investigators: |
|
Project Partners: |
|
Department: |
Experimental Psychology |
Organisation: |
University of Bristol |
Scheme: |
Standard Research |
Starts: |
01 July 2007 |
Ends: |
30 June 2010 |
Value (£): |
313,391 |
EPSRC Research Topic Classifications: |
Vision & Senses - ICT appl. |
|
|
EPSRC Industrial Sector Classifications: |
|
Related Grants: |
|
Panel History: |
|
Summary on Grant Application Form |
There is a widespread, and reasonable, assumption that our visual system has developed to see and interpret our natural surroundings. This has been investigated in the past by studying the relationship between the structure of natural scenes and the properties of biological systems looking at those scenes. However, much past work has suffered from two assumptions known to be false: first, that nothing moves in the scenes, and second, that the observers do not move their eyes. The reason for this over-simplification has been the technical difficulty of adding these important variables. We have assembled a team of researchers at two universities (Bristol and Cambridge) who, together, have the necessary expertise to take on this task. We will collect a large number of images and video clips of outdoor scenes containing natural movement, such as leaves rustling in the wind or objects in motion. We will study how the information in these scenes is encoded by the visual brain, both with theoretical models and with experiments in which human observers watch the video clips and decide whether successive clips are the same as, or different from, each other.

What makes the modelling challenging is the second issue to be explored here: we move our gaze to a particular place because only one part of our retina, the fovea, has high spatial resolution. The eye movements that we make provide us with sequential information about a scene. We want our model to capture (a) how this information is taken up when the eye is looking at one place (it is said to be fixating), and (b) how it is combined with information from previous and future fixation locations. In other words, how does vision integrate information across eye movements?

The novelty of this work is manifold.
First, we will calibrate video and still cameras to obtain accurate images of natural scenes, from which we can work out how human cones at each location would respond when looking at the scene. There has been no previous study of the time-varying properties of natural scenes, and we will provide a resource both for this study and for other interested researchers. Furthermore, we will study the interplay between fixation and information uptake and storage in human vision, for natural scenes. Finally, we will develop a computational model capable of predicting to what extent human observers will notice differences between two scenes when they move their eyes, and when scenes contain movement. Such a model is useful for many applications, such as measuring whether people will notice errors in the quality of graphics images, and for estimating the degree to which people will notice the presence of camouflaged objects in a scene.
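The first step above, mapping calibrated image values to cone responses, can be sketched in standard colorimetric terms. This is a minimal illustration, not the project's actual calibration pipeline: it assumes sRGB input and uses the published sRGB-to-XYZ (D65) and Hunt-Pointer-Estevez XYZ-to-LMS matrices, whereas a calibrated camera would supply its own measured transform.

```python
# Hedged sketch: approximate L, M, S cone excitations from one sRGB pixel.
# Assumes sRGB-encoded input; a real camera calibration would replace the
# sRGB decoding and RGB->XYZ steps with measured device characteristics.

def srgb_to_linear(c):
    """Undo the sRGB transfer function (input c in [0, 1])."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Standard sRGB (linear) -> CIE XYZ matrix, D65 white point.
SRGB_TO_XYZ = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]

# Hunt-Pointer-Estevez XYZ -> LMS matrix (D65-normalised).
XYZ_TO_LMS = [
    (0.38971, 0.68898, -0.07868),
    (-0.22981, 1.18340, 0.04641),
    (0.0, 0.0, 1.0),
]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

def cone_response(r, g, b):
    """sRGB triplet in [0, 1] -> approximate (L, M, S) cone excitations."""
    lin = tuple(srgb_to_linear(c) for c in (r, g, b))
    return mat_vec(XYZ_TO_LMS, mat_vec(SRGB_TO_XYZ, lin))
```

Applied per pixel and per video frame, this kind of transform yields the time-varying cone-excitation signals whose statistics the project proposes to characterise.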
|
Key Findings |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk |
|
Potential use in non-academic contexts |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk |
|
Impacts |
Description |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk |
Summary |
|
Date Materialised |
|
|
Sectors submitted by the Researcher |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk |
|
Project URL: |
|
Further Information: |
|
Organisation Website: |
http://www.bris.ac.uk |