
Details of Grant 

EPSRC Reference: EP/H022236/1
Title: Illuminating Colour Constancy: from Physics to Photography
Principal Investigator: Finlayson, Professor G
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Imsense Ltd
International Commission on Illumination
Department: Computing Sciences
Organisation: University of East Anglia
Scheme: Standard Research
Starts: 15 May 2010 Ends: 14 November 2014 Value (£): 643,759
EPSRC Research Topic Classifications:
Image & Vision Computing
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Creative Industries
Information Technologies
Related Grants:
EP/H022325/1
Panel History:
Panel Date    Panel Name                           Outcome
15 Dec 2009   ICT Prioritisation Panel (Dec 09)    Announced
Summary on Grant Application Form
In daily life, we depend on colour images which represent the real world, from photographs of key personal events to pictures of possible purchases. In general, these are poor approximations of the real thing. Our aim is to understand better how we perceive colours in the real world, and how to recreate that perception with images. Central to these aims is colour constancy, a fundamental phenomenon which keeps object colours stable even under large changes in the colour of the illumination: we see an apple as red whether it is under bluish daylight or yellowish tungsten light. Camera sensors, which faithfully record the changing light signals, do not naturally possess colour constancy. But digital cameras are often equipped with special colour balancing modules to cope with changes in lighting, and the photographs they produce may be further processed to remove colour casts. In computer vision, such 'colour correction' algorithms are necessary to enable machines to use colour as a reliable cue, for example in the automated grading of manufactured goods such as tiles (a simple illustration of such an algorithm is sketched after this summary).

Human vision and computer vision are typically studied in isolation from each other: the first aims to understand why colours appear as they do to humans, and the second to make colour as useful as possible to machines, regardless of how it appears. These two goals are generally not identical, because neither human nor computer colour constancy is perfect.

To bridge colour constancy from humans to machines we will perform an innovative set of experiments. First, we will systematically study illuminant metamerism. Metamerism is what makes all image reproduction work: two stimuli with vastly different colour spectra can induce the same colour percept. The light evoking a white percept on a TV has a highly spiky spectrum compared to the flat spectral reflectance of a piece of white paper in daylight. Yet illuminants which look the same when shining on white paper can sometimes make other surfaces change appearance. We experience this phenomenon when we buy clothes which look good under the artificial shop lights but less satisfactory when we take them outdoors. We will quantify this effect for real scenes under real lights using a new 'tuneable' spectral illuminator with which we can generate any light spectrum.

Our second innovation is to make use of newly available high-dynamic-range (HDR) displays. In the real world, the brightest point in a scene may be 100,000 times as bright as the darkest point, yet most displays struggle to produce a dynamic range of even 1,000:1, and printed photographs manage at most 100:1. We know that colour perception depends on the overall dynamic range of the scene. The new HDR displays can output contrast ratios of 100,000:1, and we will use them to measure constancy in laboratory conditions but with real-world brightnesses.

A third challenge that we face in making colour photographs match our perception of the real world is the inaccuracy of colour memory. Typically, when we view a photograph, we do not have the real thing to compare it with, but must recall the original scene from memory. The imperfections of our memory may then taint our judgement. It is well known that our memory colours for familiar objects such as sky, grass, and skin tend to be 'over-saturated': grass may be remembered as greener, and the sky as bluer, than they actually are. Thus, when we test colour correction algorithms by asking people which image they prefer, we might find that they do not prefer the one that most accurately reproduces the original scene, but the one that better matches their imperfect memory. We will quantify these effects of memory and preference.

Finally, our research will, at all stages, consider how measured percepts of colour might be predicted by mathematical models. Ultimately, we will design algorithms that automatically see colours as we do, making for better photographs and more useful vision machines.
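As an illustration only, the following Python sketch implements the classic 'grey-world' assumption (that the average colour of a scene is neutral), one of the simplest colour correction ideas of the kind referred to above. It is not the project's method, and the function name grey_world_balance and the simulated colour cast are hypothetical.

    import numpy as np

    def grey_world_balance(image):
        # image: H x W x 3 RGB array with float values in [0, 1]
        channel_means = image.reshape(-1, 3).mean(axis=0)   # average R, G, B over the scene
        grey = channel_means.mean()                         # target neutral level
        gains = grey / np.maximum(channel_means, 1e-6)      # per-channel correction factors
        return np.clip(image * gains, 0.0, 1.0)

    # Usage: simulate a warm (tungsten-like) colour cast and remove it.
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.2, 0.8, size=(4, 4, 3))
    tungsten = scene * np.array([1.0, 0.8, 0.5])            # hypothetical cast factors
    balanced = grey_world_balance(tungsten)
    print(balanced.reshape(-1, 3).mean(axis=0))             # channel means become roughly equal

After correction the three channel averages roughly coincide, which is exactly the grey-world criterion; more sophisticated constancy algorithms replace this crude assumption with statistical or physics-based estimates of the illuminant.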
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:
Sectors submitted by the Researcher: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.uea.ac.uk