EPSRC Reference: |
EP/W004275/1 |
Title: |
Transforming hearing aids through large-scale electrophysiology and deep learning |
Principal Investigator: |
Lesica, Professor N |
Other Investigators: |
|
Researcher Co-Investigators: |
|
Project Partners: |
|
Department: |
Ear Institute |
Organisation: |
UCL |
Scheme: |
Standard Research |
Starts: |
01 September 2022 |
Ends: |
31 August 2025 |
Value (£): |
840,504 |
EPSRC Research Topic Classifications: |
Med.Instrument.Device& Equip. |
Vision & Senses - ICT appl. |
|
EPSRC Industrial Sector Classifications: |
|
Related Grants: |
|
Panel History: |
Panel Date | Panel Name | Outcome |
16 Nov 2021 | EPSRC ICT Prioritisation Panel November 2021 | Announced |
Summary on Grant Application Form |
Hearing loss affects approximately 500 million people worldwide (11 million in the UK), making it the fourth leading cause of years lived with disability (third in the UK). The resulting burden has enormous personal and societal consequences. By impeding communication, hearing loss leads to social isolation and associated decreases in quality of life and wellbeing. It has also been identified as the leading modifiable risk factor for incident dementia, and it imposes a substantial economic burden, with estimated costs of more than £30 billion per year in the UK.
As the impact of hearing loss continues to grow, the need for improved treatments is becoming increasingly urgent. In most cases, the only treatment available is a hearing aid. Unfortunately, many people with hearing aids do not actually use them, partly because current devices, which are little more than simple amplifiers, often provide little benefit in social settings with high sound levels and background noise. Thus, there is a huge unmet clinical need, with around three million people in the UK living with untreated, disabling hearing loss. The common complaint of those with hearing loss, "I can hear you, but I can't understand you", is echoed by hearing aid users and non-users alike. Inasmuch as the purpose of a hearing aid is to facilitate communication and reduce social isolation, devices that do not enable the perception of speech in typical social settings are fundamentally inadequate.
The idea that hearing loss can be corrected by amplification alone is overly simplistic; while hearing loss does decrease sensitivity, it also causes a number of other problems that dramatically distort the information that the ear sends to the brain. To improve performance, the next generation of hearing aids must incorporate more complex sound transformations that correct these distortions. This is, unfortunately, much easier said than done. In fact, engineers have been attempting to hand-design hearing aids with this goal in mind for decades, with little success.
Fortunately, recent advances in experimental and computational technologies have created an opportunity for a fundamentally different approach. The key difficulty in improving hearing aids is that there are infinitely many ways to transform sounds, and we do not understand the fundamentals of hearing loss well enough to infer which transformations will be most effective. However, modern machine learning techniques will allow us to bypass this gap in our understanding: given a large enough database of sounds and the neural activity that they elicit with normal hearing and with hearing impairment, deep learning can be used to identify the sound transformations that best correct the distorted activity and restore perception as close to normal as possible.
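The application does not specify an architecture or training procedure, but the approach described above can be sketched under some assumptions: fit models of normal-hearing and hearing-impaired neural responses to the recorded database, freeze them, and then train a hearing-aid network so that the impaired response to the processed sound matches the normal response to the original sound. A minimal illustrative sketch follows; the network shapes, the mean-squared-error loss, and the placeholder data are assumptions for illustration, not details from the grant.

# Illustrative sketch only: model sizes, sampling rate, and loss are assumptions,
# not details taken from the grant application.
import torch
import torch.nn as nn

class ResponseModel(nn.Module):
    """Stand-in for a model of auditory neural activity fitted to recordings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=16), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=4), nn.ReLU(),
        )
    def forward(self, sound):           # sound: (batch, 1, samples)
        return self.net(sound)          # simulated activity: (batch, units, time bins)

class HearingAidNet(nn.Module):
    """Candidate hearing-aid transformation: sound in, processed sound out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=33, padding=16), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=33, padding=16),
        )
    def forward(self, sound):
        return self.net(sound)

# Assume both response models have already been fitted to recorded activity; freeze them.
f_normal, f_impaired = ResponseModel(), ResponseModel()
for p in list(f_normal.parameters()) + list(f_impaired.parameters()):
    p.requires_grad_(False)

aid = HearingAidNet()
opt = torch.optim.Adam(aid.parameters(), lr=1e-4)

sounds = torch.randn(8, 1, 16000)       # placeholder batch of 1 s sounds at 16 kHz
target = f_normal(sounds)               # normal-hearing activity for the raw sound
loss = nn.functional.mse_loss(f_impaired(aid(sounds)), target)
opt.zero_grad(); loss.backward(); opt.step()

In practice, the two response models would be fitted to the recorded normal-hearing and hearing-impaired activity rather than left untrained, and a loss better matched to spiking activity or perception might replace the simple mean-squared error used here.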
The required database of neural activity does not yet exist, but we have spent the past few years developing the recording technology required to collect it. This capability is unique; there are no other research groups in the world that can make these recordings. We have already demonstrated the feasibility of solving the machine learning problem in silico. We are now proposing to collect the large-scale database of neural activity required to fully develop a working prototype of a new hearing aid algorithm based on deep neural networks and to demonstrate its efficacy for people with hearing loss.
|
Key Findings |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Potential use in non-academic contexts |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Impacts |
Description | Summary | Date Materialised |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk | | |
|
Sectors submitted by the Researcher |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Project URL: |
|
Further Information: |
|
Organisation Website: |
|