
Details of Grant 

EPSRC Reference: EP/M01469X/1
Title: Geometric Evaluation of Stereoscopic Video
Principal Investigator: Hansard, Dr ME
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Sch of Electronic Eng & Computer Science
Organisation: Queen Mary University of London
Scheme: First Grant - Revised 2009
Starts: 01 September 2015
Ends: 31 December 2016
Value (£): 88,495
EPSRC Research Topic Classifications:
Image & Vision Computing
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Creative Industries
Related Grants:
Panel History:
Panel Date: 02 Dec 2014
Panel Name: EPSRC ICT Prioritisation Panel - Dec 2014
Outcome: Announced
Summary on Grant Application Form


3D films and games have become a popular part of digital entertainment. More and more consumer devices can show 3D content, including the new generation of head-mounted displays, which are reviving interest in virtual reality. When it works properly, 3D adds a fun and interesting new dimension to the viewing experience. But sometimes there are problems: the scene may look flat or distorted, or, even worse, there may be too much depth, which can eventually cause eye-strain and headaches.

This project is about evaluating the amount of depth that is being shown, so that video producers and game designers can avoid audience discomfort. There are many rules of thumb for setting the right amount of depth, but it would be safer to take a scientific approach. This project will develop the theory of 3D content evaluation, based on the following principles.

In order to see 3D, the images shown to the left and right eyes must be slightly different - just as they are when viewing a real scene. The differences between the two images, which are small horizontal offsets, are called binocular disparities. Roughly speaking, the larger the disparity, the greater the perceived depth. But in order to fully understand the depth/disparity relationship, it is necessary to know something about the human visual system, as well as the 3D display. In particular, we need to know (or guess) where in the scene the viewer is likely to be looking. The project will develop a system that can predict this point of interest at any given moment. This information can then be used, in conjunction with the binocular disparities, to quantify how much depth is being shown. The video producer or game designer will then be able to modify the content, so that it includes the right amount of 3D.
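To illustrate the depth/disparity relationship described above, here is a minimal sketch (in Python) that converts an on-screen disparity into a perceived depth, assuming a simple parallel-axis viewing model with a fixed viewing distance and interocular separation. The function name, parameter names, and default values are illustrative assumptions, not details taken from the project.

    def perceived_depth(disparity_m: float,
                        viewing_distance_m: float = 2.0,
                        eye_separation_m: float = 0.065) -> float:
        """Perceived depth (metres from the viewer) of a point displayed with a
        given on-screen disparity, under a simple parallel-axis viewing model.

        A positive (uncrossed) disparity places the point behind the screen;
        a negative (crossed) disparity places it in front of the screen.
        """
        if disparity_m >= eye_separation_m:
            # A disparity at or beyond the eye separation would require the
            # eyes to diverge; the point is effectively at (or beyond) infinity.
            return float("inf")
        # Similar triangles: the visual axes through the two image points
        # intersect at depth V * e / (e - d).
        return viewing_distance_m * eye_separation_m / (eye_separation_m - disparity_m)

With the default values, zero disparity places the point on the screen plane (2.0 m from the viewer), a 1 cm uncrossed disparity pushes it back to about 2.36 m, and a 1 cm crossed disparity brings it forward to about 1.73 m; this also shows why the relationship is only roughly linear for small disparities.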

The project is at the intersection of several research areas, including geometry, signal processing, and visual perception. A prototype software tool will be developed, in conjunction with a UK visual effects company, for the automatic analysis of 3D content. The final system will be tested by asking ordinary people to evaluate a range of 3D videos; if the system can predict the human responses, then it will be useful to video producers and game designers.

Key Findings, Potential use in non-academic contexts, Impacts, and Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: