Details of Grant 

EPSRC Reference: EP/S001816/1
Title: Dynamically Accurate Avatars
Principal Investigator: Taylor, Dr S
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Emteq Ltd
FaceMe
FXhome Limited
SyncNorwich
The Foundry Visionmongers Ltd (UK)
Department: Computing Sciences
Organisation: University of East Anglia
Scheme: EPSRC Fellowship - NHFP
Starts: 29 June 2018
Ends: 04 May 2022
Value (£): 557,531
EPSRC Research Topic Classifications:
Computer Graphics & Visual.
Human Communication in ICT
EPSRC Industrial Sector Classifications:
Creative Industries
Related Grants:
Panel History:
Panel Date: 08 May 2018
Panel Name: EPSRC UKRI CL Innovation Fellowship Interview Panel 5 - 8 and 9 May 2018
Outcome: Announced
Summary on Grant Application Form
Our bodies move as we speak. Movement of the jaw, lips and tongue is evidently required to produce coherent speech, but other body gestures also synchronise with the voice and contribute significantly to speech comprehension. For example, a person's eyebrows rise when they are stressing a point, their head shakes when they disagree, and a shrug might express doubt.

The goal is to build a computational model that learns the relationship between speech and upper body motion so that we can automatically predict face and body posture for any given audio speech. The predicted body pose can be transferred to computer graphics characters, or avatars, to automatically create character animation directly from speech, on the fly.
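
Purely as an illustration of how such a speech-to-motion mapping could be set up (a minimal sketch with assumed, hypothetical names and dimensions, not the project's actual model), a recurrent network can regress a vector of pose parameters for every frame of acoustic features:

    # Minimal sketch of an audio-to-pose model (hypothetical; the real
    # architecture, feature set and pose parameterisation are assumptions).
    import torch
    import torch.nn as nn

    class SpeechToPose(nn.Module):
        def __init__(self, n_audio_feats=13, n_pose_params=30, hidden=128):
            super().__init__()
            # A bidirectional GRU captures the temporal context of speech.
            self.rnn = nn.GRU(n_audio_feats, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
            # A linear head regresses pose parameters for every frame.
            self.head = nn.Linear(2 * hidden, n_pose_params)

        def forward(self, audio):      # audio: (batch, frames, features)
            context, _ = self.rnn(audio)
            return self.head(context)  # (batch, frames, pose parameters)

    # Example: two seconds of audio at 100 feature frames per second.
    model = SpeechToPose()
    audio = torch.randn(1, 200, 13)    # dummy acoustic features (e.g. MFCCs)
    pose = model(audio)                # (1, 200, 30) pose trajectory

Trained on paired audio and motion-capture data, a model of this kind can then predict a pose trajectory for unseen speech.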

A number of approaches have previously been used to map from audio to facial or head motion, but progress has been hindered by the limited amount of paired speech and body motion data available. Our research programme will use transfer learning, a machine learning technique that reuses knowledge learned on a related, data-rich task, to overcome this limitation.
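
To make the idea concrete (again a hypothetical sketch, with illustrative names and sizes rather than the project's actual method), an encoder pretrained on a data-rich audio task can be transferred and frozen, so that only a small new pose-prediction head has to be trained on the scarce speech-and-motion data:

    # Hypothetical transfer-learning sketch: reuse a pretrained audio
    # encoder and train only a new task-specific head on the small
    # speech-to-motion dataset. All names and dimensions are assumptions.
    import torch
    import torch.nn as nn

    encoder = nn.GRU(13, 128, num_layers=2, batch_first=True)
    # (In practice the encoder weights would be loaded from a model
    # pretrained on an abundant task such as speech recognition.)
    for param in encoder.parameters():
        param.requires_grad = False        # freeze the transferred layers

    pose_head = nn.Linear(128, 30)         # new, randomly initialised head
    optimizer = torch.optim.Adam(pose_head.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One training step on a (small) batch of paired audio and poses.
    audio = torch.randn(8, 200, 13)
    target_pose = torch.randn(8, 200, 30)
    features, _ = encoder(audio)
    loss = loss_fn(pose_head(features), target_pose)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because only the head's parameters are updated, far fewer examples are needed than when training the whole network from scratch.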

Our research will be used to automatically and realistically animate the face and upper body of a graphics character in time with a user's voice, in real time. This is valuable for a) controlling the body motion of avatars in multiplayer online gaming, b) driving a user's digital presence in virtual reality (VR) scenarios, and c) automating character animation in television and film production. The work will enhance the realism of avatars during live interaction between users in computer games and social VR without the need for full body tracking. It will also significantly reduce the time required to produce character animation by removing the need for expensive and time-consuming hand animation or motion capture.

We will develop novel artificial intelligence approaches to build a robust speech-to-body motion model. For this, we will design and collect a video and motion capture dataset of people speaking, and this will be made publicly available.

The project team comprises Dr. Taylor and a postdoctoral research associate (PDRA) at the University of East Anglia, Norwich, UK.

Key Findings, Potential Use in Non-Academic Contexts, Impacts, and Sectors Submitted by the Researcher
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.uea.ac.uk