
Details of Grant 

EPSRC Reference: EP/R032602/1
Title: Towards a multisensory hearing aid: Engineering synthetic audiovisual and audiotactile signals to aid hearing in noisy backgrounds
Principal Investigator: Reichenbach, Professor T
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Google
Imperial College Healthcare NHS Trust
Oticon A/S
Ruhr University Bochum
Sorbonne University (Paris IV & UPMC)
UCL
Department: Bioengineering
Organisation: Imperial College London
Scheme: EPSRC Fellowship
Starts: 01 January 2019
Ends: 28 February 2021
Value (£): 1,029,425
EPSRC Research Topic Classifications:
Biomechanics & Rehabilitation
Digital Signal Processing
Med.Instrument.Device& Equip.
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Healthcare
Information Technologies
Education
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
07 Feb 2018 | Engineering Prioritisation Panel Meeting 7 and 8 February 2018 | Announced
09 May 2018 | EPSRC UKRI CL Innovation Fellowship Interview Panel 8 - 10 and 11 May 2018 | Announced
Summary on Grant Application Form
More than 10 million people in the U.K., one in six, have some form of hearing impairment. The only assistive technology currently available to them is the hearing aid. However, hearing aids can only help people with a particular type of hearing impairment, and hearing-aid users still have major problems with understanding speech in noisy backgrounds. A lot of effort has therefore been devoted to signal processing that reduces the background noise in complex sounds, but this has not yet significantly improved speech intelligibility.

The research vision of this project is to develop a radically different technology for assisting people with hearing impairments to understand speech in noisy environments, namely through simplified visual and tactile signals that are engineered from a speech signal and that can be presented congruently with the sound. Visual information such as lip reading can indeed improve speech intelligibility significantly. Haptic information, such as a listener touching the speaker's face, can enhance speech perception as well. However, touching a speaker's face is often not an option in real life, and lip reading is often not available, for instance when a speaker is too far away or outside the field of view. Moreover, natural visual and tactile stimuli are highly complex and difficult to substitute when they are not available naturally.

In this project I will engineer simple visual and tactile signals from speech, designed to enhance the neural response to the rhythm of speech and thereby its comprehension. This builds on recent breakthroughs in our understanding of the neural mechanisms for speech processing. These breakthroughs have uncovered a neural mechanism by which activity in the auditory areas of the brain tracks the speech rhythm, set by the rates of syllables and words, and thus parses speech into these functional constituents. Strikingly, this speech-related neural activity can be enhanced by visual and tactile signals, improving speech comprehension. These remarkable visual-auditory and somatosensory-auditory interactions thus open an efficient, non-invasive way of increasing the intelligibility of speech in noise by providing congruent visual and tactile information.
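
As a toy illustration of this envelope-tracking mechanism (a sketch for the reader, not the project's analysis pipeline), the following computes a simple tracking score between the speech envelope and a neural recording; the 1-8 Hz band, the lag window, and the input conventions are all assumptions made for illustration.

```python
# Toy illustration, not the project's analysis: score how strongly a
# neural signal (e.g. one EEG channel) tracks the speech envelope, via
# the peak normalised cross-correlation in the ~1-8 Hz syllable-rate
# band within a +/- 300 ms lag window. Band edges, lag window and the
# inputs (same length, same sampling rate) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def tracking_score(envelope, eeg, fs, band=(1.0, 8.0), max_lag_s=0.3):
    b, a = butter(2, band, btype="bandpass", fs=fs)
    x = filtfilt(b, a, envelope)
    y = filtfilt(b, a, eeg)
    x = (x - x.mean()) / x.std()                  # z-score both signals
    y = (y - y.mean()) / y.std()
    xc = correlate(y, x, mode="full") / len(x)    # normalised cross-correlation
    lags = np.arange(-len(x) + 1, len(x))         # lags in samples
    keep = np.abs(lags) <= int(max_lag_s * fs)    # restrict to plausible delays
    return np.max(np.abs(xc[keep]))
```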

The required visual and tactile stimuli need to be engineered to efficiently drive the cortical response to the speech rhythm. Since the speech rhythm is evident in the speech envelope, a single temporal signal, either from a single channel or a few channels (low density), will suffice for the required visual and tactile signals. They can therefore later be integrated with non-invasive wearable devices such as hearing aids. Because this multisensory speech enhancement will employ existing neural pathways, the developed technology will not require training and will therefore benefit young and elderly people alike.
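
For concreteness, here is a minimal sketch (under assumed parameter choices, not the project's actual design) of how such a low-density signal could be derived: take the amplitude envelope of the audio, band-pass it around syllable rates, and downsample it to drive a visual or vibrotactile cue. The file name, filter band, and output rate are illustrative.

```python
# Minimal sketch, NOT the project's actual pipeline: derive a low-rate
# "speech rhythm" signal from audio via its amplitude envelope, of the
# kind that could drive a visual or vibrotactile cue. The 1-8 Hz band
# (roughly syllable/word rates) and the 100 Hz output rate are
# illustrative assumptions. Assumes a mono WAV file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, hilbert, resample

fs, audio = wavfile.read("speech.wav")            # hypothetical input file
audio = audio.astype(np.float64)
audio /= np.max(np.abs(audio))                    # normalise amplitude

envelope = np.abs(hilbert(audio))                 # broadband amplitude envelope

# Band-pass the envelope around syllable/word rates (~1-8 Hz)
b, a = butter(2, [1.0, 8.0], btype="bandpass", fs=fs)
rhythm = filtfilt(b, a, envelope)

# Downsample to a low rate suitable for a tactile or visual actuator
target_fs = 100                                   # Hz, illustrative
rhythm_low = resample(rhythm, int(len(rhythm) * target_fs / fs))
```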

My specific aims are:
(1) to engineer synthetic visual stimuli from speech to enhance speech comprehension,
(2) to engineer synthetic tactile stimuli from speech to enhance speech comprehension,
(3) to develop a computational model for speech enhancement through multisensory integration,
(4) to integrate the engineered synthetic visual and tactile stimuli paired to speech presentation, and
(5) to evaluate the efficacy of the developed multisensory stimuli for aiding patients with hearing impairment.
I will achieve these aims by working together with six key industrial, clinical and academic partners.

By inventing and demonstrating a radically new approach to hearing-aid technology, this research will lead to novel, efficient ways of improving speech-in-noise understanding, the key difficulty for people with hearing impairment. The project is closely aligned with the recently founded Centre for Neurotechnology at Imperial College, as well as, more generally, with the current major U.S. and E.U. initiatives on brain research.
Key Findings
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.imperial.ac.uk