EPSRC Reference: |
EP/H000038/1 |
Title: |
The Spatial Integration and Segmentation of Luminance Contrast in Human Spatial Vision |
Principal Investigator: |
Meese, Professor T |
Other Investigators: |
|
Researcher Co-Investigators: |
|
Project Partners: |
|
Department: |
Sch of Life and Health Sciences |
Organisation: |
Aston University |
Scheme: |
Standard Research |
Starts: |
01 July 2009 |
Ends: |
30 June 2013 |
Value (£): |
622,851 |
EPSRC Research Topic Classifications: |
Vision & Senses - ICT appl. |
|
EPSRC Industrial Sector Classifications: |
No relevance to Underpinning Sectors |
|
Related Grants: |
|
Panel History: |
Panel Date | Panel Name | Outcome |
24 Apr 2009 | ICT Prioritisation Panel (April 09) | Announced |
Summary on Grant Application Form |
When we open our eyes, we see, without effort. Our visual experience begins with the mechanics of focussing the image on the back of the eye; but to make sense of the image, to perceive, our brains must identify the various parts of the image and understand their relations. Just like a silicon-based computer, the brain performs millions of computations quickly and effectively, so seamlessly that we never sense them. But what are the computations needed to recognise, say, your mother; to segment an object from its background; or even to appreciate that one part of an image belongs with another? The starting point for this analysis is the distribution of light levels across the retinal image, which we can think of as a set of pixels. Interesting parts of the image (e.g. object boundaries) occur at regions of change: where neighbouring pixels have very different values. These regions are identified by neurons in primary visual cortex (V1), which compute differences between adjacent pixel values to build a neural image of local contrasts: the 'contrast-image'. These contrast-defined local image features are then combined across retinal space at later stages of the visual hierarchy to represent elongated contours (e.g. the branches of a tree) and textured surfaces (e.g. a ploughed field) in what is sometimes known as a 'feature-map'.

One major goal in vision science is to construct accurate computer models of the visual system so that computers can be made to process images in the same way as human brains. But there has been a major obstacle. Experiments confirm that feature integration (summing) is involved in constructing the 'feature-map', but they also imply that contrast is not summed beyond the neighbourhood of each local contrast processor in V1. But how can local feature representations be summed without also summing the underlying contrast codes?

We achieved the breakthrough on this by designing novel images containing patches of contrast distributed over retinal space (Meese & Summers, 2007). These allowed us to measure the contrast integration process while controlling the confounding effects of neural noise and retinal inhomogeneity that have plagued previous studies. By analysing the relation between visual performance (an observer's probability of detecting the target stimulus) and stimulus contrast, we showed that contrast is summed over substantial regions of the retina after all, but that under normal viewing conditions its effects go unnoticed because of a counterbalancing effect of blanket suppression from a system of contrast gain control. In other words, we have shown that contrast summation is organised very differently from the way first proposed. These results have dispelled orthodoxy and now prompt a thorough re-evaluation of our understanding of contrast and feature integration in human vision.

In the project proposed here, we will use our new type of stimulus and modelling framework to investigate the computational rules that control the point-by-point integration of information in the 'contrast-image'. In particular, our working hypothesis is that the visual system does this by maximising the signal-to-noise ratio. But what directs and limits the signal integration? And how does this relate to the grouping rules of Gestalt psychology and to other results on contour integration and contrast perception?

Through careful stimulus manipulations, our 19+ experiments will address these issues, mainly using normal healthy observers, but we will also study the disrupted amblyopic visual system as a way of further probing the system's organisation. Overall, this work will illuminate the links between pixel-based contrast responses and the later, region-based symbolic feature analyses. Only with these links in place can we begin to appreciate how the brain transforms the retinal image into the subjective experience of seeing.
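To make the two-stage idea described above concrete, the following is a minimal, illustrative sketch in Python (not the investigators' actual model): neighbouring pixel values are differenced to form a local 'contrast-image', and the contrast responses are then pooled over retinal area under divisive contrast gain control. The function names and all parameter values (p, q, z, m) are hypothetical placeholders, not values from the project.

import numpy as np

def contrast_image(image):
    # Approximate local contrast as the magnitude of luminance differences
    # between horizontally and vertically adjacent pixels.
    dx = np.diff(image, axis=1, append=image[:, -1:])
    dy = np.diff(image, axis=0, append=image[-1:, :])
    return np.hypot(dx, dy)

def pooled_response(contrast, p=2.4, q=2.0, z=1.0, m=4.0):
    # Pointwise divisive contrast gain control, followed by Minkowski
    # summation of the gain-controlled responses over retinal space.
    # All exponents and constants here are hypothetical placeholders.
    r = contrast ** p / (z + contrast ** q)
    return np.sum(r ** m) ** (1.0 / m)

# Hypothetical usage: a noisy pixel image containing a patch of higher contrast.
rng = np.random.default_rng(0)
img = 0.5 + 0.05 * rng.standard_normal((64, 64))
img[24:40, 24:40] += 0.2 * np.sign(rng.standard_normal((16, 16)))
print(pooled_response(contrast_image(img)))

The divisive denominator is the relevant design choice: widespread suppression of this kind can mask the benefit of wide-area summation under ordinary viewing, which is the counterbalancing effect described in the summary above.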
|
Key Findings |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Potential use in non-academic contexts |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Impacts |
Description |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk |
Summary |
|
Date Materialised |
|
Sectors submitted by the Researcher |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Project URL: |
|
Further Information: |
|
Organisation Website: |
http://www.aston.ac.uk |