Details of Grant
EPSRC Reference:
EP/F037384/1
Title:
Analysing Dynamic Change in Faces
Principal Investigator:
McOwan, Professor PW
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department:
Computer Science
Organisation:
Queen Mary University of London
Scheme:
Standard Research
Starts:
01 June 2008
Ends:
30 November 2011
Value (£):
287,392
EPSRC Research Topic Classifications:
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Creative Industries
Related Grants:
EP/F037503/1
Panel History:
Panel Date: 06 Dec 2007
Panel Name: ICT Prioritisation Panel (Technology)
Outcome: Announced
Summary on Grant Application Form
Humans are very good at understanding and interpreting the motion of other people's faces. We can effortlessly recognise emotions and interpret subtle facial behaviours such as sardonic smiles, thoughtful frowns or questioning looks, but the question remains: how do we do this? We need new computer-based tools to explore this fascinating area of psychology. In this project we will develop a new form of three-dimensional camera system that will allow us to record the movements of people's faces and then process this video information to discover the components of movement that make them up.

Once we can discover the parts of movement that add together to make a familiar facial expression, we can use this to create new faces: in much the same way as a music mixing desk allows you to blend together different sounds, we will have software that allows us to mix new faces with whatever expressions we select. Using this new tool we can then carry out experiments to look at how we process faces and imitate other people's facial movements. We will examine how observing the movement in one person's face can be translated into movements of our own face to imitate the action. Because the faces we use are created in the computer, we can manipulate them in any way we like. This new technology will allow us to address a large set of basic questions. Can we imitate a person if the face is seen only from the side, or if it is shown upside down? Do we do better when we imitate ourselves, a friend or a stranger? We can even create caricatures of faces, where we exaggerate particular movements, to evaluate how these facial gestures are represented in the human face processing system.

A better understanding of how imitation works will help us understand social behaviours and their development, and also help in developing computer systems that can both recognise and react to our facial expressions. The new face-mixing software will also have commercial applications, for example in the computer games and entertainment industry. Movements from one person's face can be used as instructions to make another person's face perform the same movement. This will allow, for example, a voice actor to control the movements of a character's face in addition to simply providing the expressive dialogue, the generation of high-quality realistic synthetic actors, or faster, more efficient ways to video conference over your mobile phone.
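The mixing-desk analogy above corresponds to a standard linear decomposition of facial motion. The sketch below is a minimal illustration, assuming PCA over tracked 3D landmark displacements; the summary does not state the project's actual decomposition method, and every name, array shape and weight here is an illustrative assumption, not the project's pipeline.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 video frames, each a flattened vector of 3D
# positions for 68 tracked facial landmarks (68 * 3 = 204 values).
frames = rng.normal(size=(200, 204))

# Centre the data on the neutral (mean) face.
neutral = frames.mean(axis=0)
centred = frames - neutral

# PCA via SVD: the rows of `components` are orthogonal "movement
# components" that add together to make the recorded expressions.
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

k = 5                    # keep the k strongest components
basis = components[:k]   # shape (k, 204)

# "Mixing desk": pick a fader level per component and add the weighted
# components back onto the neutral face to synthesise a new expression
# that never appeared in the recordings.
weights = np.array([1.2, -0.5, 0.0, 0.8, 0.3])
new_expression = neutral + weights @ basis

# Exaggerating the weights beyond their recorded range gives the motion
# caricatures mentioned in the summary.
caricature = neutral + (2.0 * weights) @ basis

Because the components combine linearly on top of a neutral face, applying weights estimated from one person's recordings to a basis built from another person's face gives a simple form of the expression-transfer application described above.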
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:
http://royalsociety.org/summer-science/2011/facial-perception/
Further Information:
Organisation Website: