EPSRC Reference: EP/K007491/1
Title: Multisource audio-visual production from user-generated content
Principal Investigator: Cavallaro, Professor A
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Sch of Electronic Eng & Computer Science
Organisation: Queen Mary University of London
Scheme: Standard Research
Starts: 08 February 2013
Ends: 31 January 2016
Value (£): 362,889
EPSRC Research Topic Classifications:
Digital Signal Processing
Image & Vision Computing
EPSRC Industrial Sector Classifications:
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
04 Sep 2012 | EPSRC ICT Responsive Mode - Sep 2012 | Announced
Summary on Grant Application Form
The pervasiveness of amateur media recorders, whether embedded in smartphones or used as stand-alone devices, is revolutionising the way events are captured and reported. The aim of this project is to devise intelligent editing and production algorithms, based on new signal processing techniques, for multi-view user-generated content.
The explosion of shared video content creates opportunities not only to analyse stories but also to report them in a timely manner, ranging from disaster scenes and protests to music concerts and sports events. However, the sheer amount of increasingly available data and its varying quality make the timely selection and editing of appropriate multimedia items very difficult, strongly limiting the opportunity to harvest these data for security, cultural and entertainment applications. There is an urgent need to investigate and develop new ways to support or replace what used to be the role of a producer/director in this rapidly changing landscape. In particular, there is a need to automate production tasks and to generate new, high-quality content from multiple views.
The key aspect of the project is the integration of audio and visual inputs, which support each other in reaching objectives that would be impossible using either modality alone. We will focus on a set of relevant event types: sports, music shows and crowd scenes. We will devise novel multisource processing techniques to improve audio-visual production and to enable the synchronisation of independently captured recordings, as illustrated by the sketch below. This will in turn allow the generation of novel, higher-quality audio-visual renderings of captured events.
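To make the synchronisation step concrete, the sketch below aligns two unsynchronised recordings of the same event by cross-correlating their short-time audio energy envelopes. This is a minimal Python/NumPy sketch under assumed conditions (mono signals at a common sample rate); the method and all function names are illustrative assumptions, not the project's published approach.

    import numpy as np

    def energy_envelope(signal, frame_len=1024, hop=512):
        """Short-time energy envelope: one value per frame, zero-mean, unit-norm."""
        n_frames = max(1, 1 + (len(signal) - frame_len) // hop)
        env = np.empty(n_frames)
        for i in range(n_frames):
            frame = np.asarray(signal[i * hop : i * hop + frame_len], dtype=np.float64)
            env[i] = np.sum(frame ** 2)
        env -= env.mean()              # remove DC so the correlation peak is meaningful
        norm = np.linalg.norm(env)
        return env / norm if norm > 0 else env

    def estimate_offset(sig_a, sig_b, sample_rate, frame_len=1024, hop=512):
        """Return a lag (seconds) such that sig_a(t) ~ sig_b(t - lag).

        A negative lag means the shared event occurs earlier in sig_a.
        Illustrative sketch only: envelope cross-correlation is one simple
        way to align recordings of a common event.
        """
        env_a = energy_envelope(sig_a, frame_len, hop)
        env_b = energy_envelope(sig_b, frame_len, hop)
        corr = np.correlate(env_a, env_b, mode="full")   # all relative frame lags
        lag_frames = int(np.argmax(corr)) - (len(env_b) - 1)
        return lag_frames * hop / sample_rate

    if __name__ == "__main__":
        # Synthetic check: the same 1 s noise burst placed at 1 s in one
        # recording and at 2 s in the other.
        rate = 16000
        rng = np.random.default_rng(0)
        event = rng.standard_normal(rate)
        rec_a = np.concatenate([np.zeros(rate), event, np.zeros(rate)])
        rec_b = np.concatenate([np.zeros(2 * rate), event])
        print(f"estimated lag: {estimate_offset(rec_a, rec_b, rate):.2f} s")  # ~ -1 s (frame-quantised)

In practice, user-generated recordings also differ in gain, background noise and device clock drift, so a robust system would refine this coarse audio alignment with further audio-visual cues; the envelope correlation above only illustrates the basic principle.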
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:
Further Information:
Organisation Website: