EPSRC Reference: EP/S03580X/1
Title: Environment and Listener Optimised Speech Processing for Hearing Enhancement in Real Situations (ELO-SPHERES)
Principal Investigator: Huckvale, Professor MA
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Speech Hearing and Phonetic Science
Organisation: UCL
Scheme: Standard Research
Starts: 01 October 2019
Ends: 31 March 2023
Value (£): 554,977

EPSRC Research Topic Classifications:
Artificial Intelligence
Biomechanics & Rehabilitation
Computer Graphics & Visual.
Music & Acoustic Technology
Vision & Senses - ICT appl.

EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors

Related Grants:

Panel History:
Panel Date | Panel Name | Outcome
02 May 2019 | EPSRC ICT Prioritisation Panel May 2019 | Announced

Summary on Grant Application Form
Although modern hearing aids offer the potential to exploit advanced signal processing techniques, the experience and capabilities of hearing-impaired listeners are still unsatisfactory in many everyday listening situations. In part this is because hearing aids reduce or remove the subtle differences between the signals received at the two ears. The normally-hearing auditory system uses such differences to determine the location of sound sources in the environment, to separate wanted from unwanted sounds, and to allow attention to be focused on a particular talker in a noisy, multi-talker environment.
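The interaural cues referred to above (time and level differences between the ears) can be illustrated with a short, self-contained sketch. This is purely illustrative and is not part of the grant or the project's own methods: it assumes a simulated two-ear recording at 16 kHz, an arbitrary 0.3 ms / 6 dB offset between the ears, and uses NumPy to estimate the interaural time difference from a cross-correlation peak and the interaural level difference from an energy ratio.

```python
# Illustrative sketch only (not the project's method): estimate interaural
# time difference (ITD) and interaural level difference (ILD) from a
# simulated binaural signal. Sample rate, delay and level are assumed values.
import numpy as np

fs = 16000                                     # assumed sample rate (Hz)
rng = np.random.default_rng(0)
source = rng.standard_normal(int(0.05 * fs))   # 50 ms of noise as a stand-in "talker"

# Simulate a source on the listener's right: the left-ear signal arrives
# ~0.3 ms later and ~6 dB quieter than the right-ear signal.
delay_samples = 5                              # 5 samples ≈ 0.31 ms at 16 kHz
right = source
left = 0.5 * np.roll(source, delay_samples)

# ITD: lag of the cross-correlation peak. A positive lag here means the
# left-ear signal lags the right-ear signal, i.e. the source is to the right.
n = source.size
lags = np.arange(-n + 1, n)
xcorr = np.correlate(left, right, mode="full")
itd_ms = 1000 * lags[np.argmax(xcorr)] / fs

# ILD: ratio of signal energies at the two ears, in dB.
ild_db = 10 * np.log10(np.sum(right**2) / np.sum(left**2))

print(f"Estimated ITD: {itd_ms:.2f} ms, ILD: {ild_db:.1f} dB")
```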
True binaural hearing aids - in which the sound processing that takes place in the left and right ears is coordinated rather than independent - are just becoming available. However, practitioners' knowledge of how best to match their potential to the requirements of impaired listeners and listening situations is still very limited. One significant problem is that typical existing listening tests do not reflect the complexity of real-world listening situations, in which there may be many sound sources that move around, and in which listeners move their heads and also make use of visual information. A second important issue is that hearing-impaired listeners vary widely in their underlying spatial hearing abilities.
This project aims to better understand the problems of hearing-impaired listeners in noisy, multiple-talker conversations, particularly with regard to (i) their ability to attend to and recognise speech coming from different directions while listening through binaural aids, and (ii) their use of audio-visual cues. We will develop new techniques for coordinated processing of the signals arriving at the two ears that will allow the locations and characteristics of different sound sources in complex environments to be identified, and that will tailor the information presented to match the individual listener's pattern of hearing loss. We will build virtual reality simulations of complex listening environments and develop audio-visual tests to assess the abilities of listeners. We will investigate how the abilities of hearing-impaired listeners vary with their degree of impairment and the complexity of the environment.
This research project is a timely and focussed addition to the knowledge and techniques needed to realise the potential of binaural hearing aids. Its outcomes will provide solutions to some of the key problems faced by hearing aid users in noisy, multiple-talker situations.

Key Findings
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk

Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk

Impacts
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk

Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk

Project URL:
Further Information:
Organisation Website: