
Details of Grant 

EPSRC Reference: EP/S028730/1
Title: Future Colour Imaging
Principal Investigator: Finlayson, Professor G
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Apple, Inc.
Earlham Institute
Pompeu Fabra University
Simon Fraser University
Society for Imaging Science & Technology
Spectral Edge Ltd
THOUSLITE (Thousand Lights Lighting)
University of Bradford
University of Cambridge
University of Essex
University of Leeds
University of Manchester, The
York University Canada
Zhejiang University
Department: Computing Sciences
Organisation: University of East Anglia
Scheme: EPSRC Fellowship
Starts: 01 September 2019
Ends: 30 June 2025
Value (£): 1,046,725
EPSRC Research Topic Classifications:
Image & Vision Computing
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Creative Industries
Information Technologies
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
17 Jan 2019 | EPSRC ICT Prioritisation Panel January 2019 | Announced
28 Feb 2019 | ICT and DE Fellowship Interviews 28 February 2019 | Announced
Summary on Grant Application Form
Colour imaging is part of everyday life. Whether we watch TV, browse content on our tablets or phones, or use apps and software in our work, the content we see on our screens is the result of decades of colour & imaging research.

In the future, the challenge is to understand more about the content of images. As an example, in autonomous driving we wish to build a platform that sees the road regardless of atmospheric conditions: we don't want to crash when driving in fog. It is well known that an image recording the near-infrared signal is much sharper (compared to RGB) in foggy conditions. What is near infrared? The visible spectrum has a natural rainbow order: Violet, Indigo, Blue, Green, Yellow, Orange and Red. Infrared is the 'next colour' after red that we can't quite see. Image fusion can be used to map the RGB+NIR signal to a fused RGB counterpart that we can see. Through image fusion the same detail will be present in foggy or non-foggy conditions. Advantageously, image fusion is a tool that will allow non-visible information to be incorporated and deployed in existing RGB-based AI scene interpretation systems with minimal retraining.

Our project begins with the Spectral Edge image fusion method, the current leading technique. This method - like most image fusion algorithms - works by combining edges from the four input channels (RGB+NIR) to make a fused 3-channel, RGB-only edge map. The edges are then transformed (the technical term is reintegrated) back into a colour image. Unfortunately, and necessarily, the reintegrated images often have defects such as bright halos around edges or smearing. We argue that these defects are a direct consequence of how 'edges' are defined. In our research we will - based on a surprising mathematical insight - develop a new definition of edge, quite a bold thing to do after 50 years of image processing research! By construction, the reintegrated new edges will produce far fewer halo and smearing artefacts.
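The combine-then-reintegrate idea can be illustrated with a deliberately simplified sketch. This toy (it is not the Spectral Edge algorithm; `fuse_rows` and its row-wise reintegration are illustrative assumptions) fuses horizontal gradients only, keeping whichever of the RGB or NIR gradients is stronger at each pixel, then reintegrates by cumulative summation:

```python
import numpy as np

def fuse_rows(rgb, nir):
    # Toy gradient-domain fusion along image rows (an illustrative
    # sketch, NOT the Spectral Edge method). For each colour channel,
    # keep whichever horizontal gradient -- RGB or NIR -- has the
    # larger magnitude, then reintegrate back into intensities.
    g_nir = np.diff(nir, axis=1)                 # NIR horizontal gradients
    fused = np.empty_like(rgb, dtype=float)
    for c in range(rgb.shape[2]):
        g_rgb = np.diff(rgb[:, :, c], axis=1)    # channel gradients
        g = np.where(np.abs(g_nir) > np.abs(g_rgb), g_nir, g_rgb)
        # Reintegration: anchor each row at its first pixel and sum
        # the fused gradients back up along the row.
        start = rgb[:, :1, c]
        fused[:, :, c] = np.concatenate(
            [start, start + np.cumsum(g, axis=1)], axis=1)
    return fused
```

A real method reintegrates a full 2-D edge map (a much harder, generally non-integrable problem), which is exactly where the halo and smearing defects described above arise.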

We will then use our improved edge representation and improved image fusion algorithm to make better-looking images. These might be the fused images themselves: wouldn't it be great to have smart binoculars that let us see more detail when it is rainy, or in a landscape blurred by distance? However, we also believe the future of photography is, in general, content-based, and that image fusion will help us determine the content of an image. As an example, when we take a picture at sunset, the shadows in the scene are very blue, while outside the shadows the light is very warm (orangish). The best image reproductions for these scenes involve manually and differentially processing shadow and non-shadow regions. Here, we seek to find the illumination content in an image automatically. Then, in a second step, we will develop a new content-based framework for manipulating images so that, for this sunset example, we don't need to edit the photos ourselves.
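The manual edit described above can be sketched in a few lines. This is a hypothetical illustration, not the project's framework: `region_white_balance`, its mask, and the gain values are all assumptions, standing in for the step where shadow content has already been found:

```python
import numpy as np

def region_white_balance(img, shadow_mask, shadow_gain, sun_gain):
    # Hypothetical sketch of differential region processing: once
    # shadow pixels are identified, apply different per-channel gains
    # to shadow and sunlit regions -- e.g. warm up the blue shadows
    # while leaving the orange sunlit light alone.
    gains = np.where(shadow_mask[..., None],
                     np.asarray(shadow_gain),
                     np.asarray(sun_gain))
    return np.clip(img * gains, 0.0, 1.0)
```

The research goal is to make the hard part - producing `shadow_mask` and choosing the gains from the illumination content of the image - automatic.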

In complementary work, we are also interested in helping people see better. Indeed, a large body of research demonstrates that coloured filters can help mitigate visual stress. Coloured filters are used for dyslexia (sometimes leading to dramatic improvements in reading speed), and there is now blue-absorbing glass which reduces the blue light coming from a tablet display (since blue light at night tends to keep you awake). Much of the prior art in this area is 'direct': we find a filter to directly affect how we see (simply, if we put a yellow filter in front of the eye then everything looks more yellow). Our idea is to design filters that are tied to the tasks we need to solve. For the problem of matching colours, we will design filters so that if you suffer from colour-blindness you will be able to colour match as if you had normal colour vision. We will also develop indirect solutions for the 'blue light' problem and for visual stress.
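The 'direct' effect of a filter follows from basic colorimetry: a filter multiplies the light spectrum wavelength by wavelength before it reaches each sensor. A minimal sketch, using hypothetical toy spectra (the sensitivities, transmittances and light below are assumptions, not measured data):

```python
import numpy as np

def sensor_response(sensitivities, transmittance, light):
    # Basic colorimetry: a filter attenuates the light spectrum
    # per wavelength, so each sensor integrates (here, sums over
    # sampled wavelengths) sensitivity x transmittance x light.
    # All three arrays are sampled at the same wavelengths.
    return sensitivities.T @ (transmittance * light)
```

A yellow (blue-absorbing) filter therefore lowers the blue sensor's response, which is the direct route; the task-based designs proposed here instead optimise the transmittance so that a downstream task, such as colour matching, comes out right.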

Key Findings, Potential use in non-academic contexts, Description, Date Materialised, Sectors submitted by the Researcher:
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.uea.ac.uk