
Details of Grant 

EPSRC Reference: EP/L000539/1
Title: S3A: Future Spatial Audio for an Immersive Listener Experience at Home
Principal Investigator: Hilton, Professor A
Other Investigators:
Nelson, Professor P
Cox, Professor TJ
Fazi, Dr FM
Researcher Co-Investigators:
Project Partners:
Bang & Olufsen
BBC
Codemasters
DTS Inc
Electronic Arts
KEF Audio (UK) Ltd
NHK Science & Technology Research Labs
Orbitsound Limited
Sony (UK)
Department: Centre for Vision, Speech and Signal Processing (CVSSP)
Organisation: University of Surrey
Scheme: Programme Grants
Starts: 12 December 2013
Ends: 12 June 2019
Value (£): 5,415,204
EPSRC Research Topic Classifications:
Digital Signal Processing
Image & Vision Computing
Music & Acoustic Technology
EPSRC Industrial Sector Classifications:
Creative Industries
Related Grants:
Panel History:
Panel Date: 12 Sep 2013
Panel Name: Programme Grant Interviews (ICT) - 12 September 2013
Outcome: Announced
Summary on Grant Application Form
3D sound can offer listeners the experience of "being there" at a live event, such as the Proms or the Olympic 100m, but it currently requires highly controlled listening spaces and loudspeaker setups. The goal of S3A is to realise practical 3D audio for the general public, enabling immersive experiences at home or on the move.

Virtually the whole of the UK population consumes audio. S3A aims to unlock the creative potential of 3D sound and deliver to listeners a step change in immersive experiences. This requires a radically new, listener-centred approach to audio, enabling 3D sound production to adapt dynamically to the listener's environment. Achieving immersive audio experiences in uncontrolled living spaces presents a significant research challenge: it requires major advances in our understanding of the perception of spatial audio, together with new representations of audio and the signal processing that allows content creation and perceptually accurate reproduction.

Existing audio production formats (stereo, 5.1) and those proposed for future cinema spatial audio (24 or 128 channels) are channel-based, requiring specific controlled loudspeaker arrangements that are simply not practical for the majority of home listeners. S3A will pioneer a novel object-based methodology for audio signal processing that allows flexible production and reproduction in real spaces. Reproduction will adapt to the loudspeaker configuration, room acoustics and listener locations. The fields of audio and visual 3D scene understanding will be brought together to identify and model audio-visual objects in complex real scenes. Audio-visual objects are sound sources or events with known spatial properties of shape and location over time, e.g. a football being kicked, a musical instrument being played or a crowd chanting at a football match.

Object-based representation will transform audio production from existing channel-based signal mixing (stereo, 5.1, 22.2) to spatial control of isolated sound sources and events. This will realise the creative potential of 3D sound, enabling intelligent user-centred content production, transmission and reproduction of 3D audio content in platform-independent formats. Object-based audio will allow flexible delivery (broadcast, IP and mobile) and adaptive reproduction of 3D sound on existing and new digital devices.
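To make the object-based idea concrete, here is a minimal illustrative sketch, not the S3A project's actual algorithms: the AudioObject structure and render_to_layout function are invented for illustration. A mono source carries direction metadata instead of being mixed into fixed channels, and a renderer pans it across whatever loudspeaker layout happens to be present, so the same content plays on a stereo pair or an irregular living-room setup without re-mixing.

```python
# Hypothetical sketch of object-based rendering: pairwise constant-power
# panning of audio objects onto an arbitrary 2D loudspeaker layout.
import numpy as np
from dataclasses import dataclass

@dataclass
class AudioObject:
    signal: np.ndarray    # mono sample buffer
    azimuth_deg: float    # intended direction of the source, 0 deg = front

def render_to_layout(objects, speaker_azimuths_deg):
    """Pan each object between the two loudspeakers adjacent to its
    direction (constant-power pairwise panning in the horizontal plane)."""
    speakers = np.asarray(sorted(a % 360.0 for a in speaker_azimuths_deg))
    n_samples = max(len(o.signal) for o in objects)
    out = np.zeros((len(speakers), n_samples))
    for obj in objects:
        az = obj.azimuth_deg % 360.0
        # locate the pair of speakers bracketing the object's direction,
        # wrapping around the circle at the ends of the sorted list
        idx = np.searchsorted(speakers, az)
        i_right = idx % len(speakers)
        i_left = (idx - 1) % len(speakers)
        span = (speakers[i_right] - speakers[i_left]) % 360.0 or 360.0
        frac = ((az - speakers[i_left]) % 360.0) / span
        # constant-power crossfade keeps perceived loudness stable
        out[i_left, :len(obj.signal)] += np.cos(frac * np.pi / 2) * obj.signal
        out[i_right, :len(obj.signal)] += np.sin(frac * np.pi / 2) * obj.signal
    return out

# The same scene renders to any layout with no re-mixing of the content:
fs = 48000
tone = 0.1 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
scene = [AudioObject(signal=tone, azimuth_deg=20.0)]
stereo = render_to_layout(scene, [-30, 30])
living_room = render_to_layout(scene, [-110, -35, 0, 28, 100])
```

The design point this illustrates is the one in the summary above: the production format stores sources plus spatial metadata, and the channel feeds are computed at reproduction time from the actual loudspeaker positions.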
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.surrey.ac.uk