
Details of Grant 

EPSRC Reference: EP/M026957/1
Title: Machine Learning for Hearing Aids: Intelligent Processing and Fitting
Principal Investigator: Turner, Dr RE
Other Investigators:
Moore, Professor BCJ
Researcher Co-Investigators:
Project Partners:
Cambridge University Hospitals Trust
Chear
Medical Research Council (MRC)
Phonak Hearing Systems
Department: Engineering
Organisation: University of Cambridge
Scheme: Standard Research - NR1
Starts: 01 December 2015
Ends: 31 May 2019
Value (£): 565,347
EPSRC Research Topic Classifications:
Digital Signal Processing
Music & Acoustic Technology
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Healthcare
Related Grants:
Panel History:
Panel Date     Panel Name                  Outcome
10 Mar 2015    Hearing Aid Technologies    Announced
Summary on Grant Application Form
Current hearing aids suffer from two major limitations:

1) hearing aid audio processing strategies are inflexible and do not adapt sufficiently to the listening environment,

2) hearing tests and hearing aid fitting procedures do not allow reliable diagnosis of the underlying nature of the hearing loss and frequently lead to poor fitting of devices.

This research programme will use new machine learning methods to revolutionise both of these aspects of hearing aid technology, leading to intelligent hearing devices and testing procedures that actively learn about a patient's hearing loss, enabling more personalised fitting.

Intelligent audio processing

The optimal audio processing strategy for a hearing aid depends on the acoustic environment. A conversation held in a quiet office, for example, should be processed in a different way from one held in a busy, reverberant restaurant. Current high-end hearing aids do switch between a small number of processing strategies, based upon a simple acoustic environment classification system that monitors basic properties of the incoming audio. However, the classification accuracy is limited, which is one of the reasons why hearing devices perform very poorly in noisy multi-source environments. Future intelligent devices should be able to recognise a far larger and more diverse set of audio environments, possibly using wireless communication with a smartphone, and should use this information to guide how the sound is processed. The purpose of the first arm of the project is to develop algorithms that will facilitate the development of such devices.
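To make the classification idea concrete, the sketch below shows the kind of lightweight environment classifier such a system might use: a handful of cheap frame-level features (energy, zero-crossing rate, spectral centroid and spectral flatness) averaged over a clip and fed to an off-the-shelf classifier. The feature set, the synthetic training data and the scikit-learn model are illustrative assumptions, not the project's actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_features(audio, sr=16000, frame_len=1024, hop=512):
    """Summarise a clip with a few cheap per-frame statistics
    (RMS energy, zero-crossing rate, spectral centroid, spectral flatness),
    averaged over time."""
    feats = []
    for start in range(0, len(audio) - frame_len, hop):
        frame = audio[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        mag = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        flatness = np.exp(np.mean(np.log(mag + 1e-12))) / (np.mean(mag) + 1e-12)
        feats.append([rms, zcr, centroid, flatness])
    return np.mean(feats, axis=0)

# Hypothetical training set: labelled clips for a handful of environments.
# Here the "clips" are just synthetic noise at different levels, purely so
# the example runs end to end.
rng = np.random.default_rng(0)
labels = ["quiet_office", "busy_restaurant", "street", "in_car"]
X = np.stack([frame_features(rng.standard_normal(16000) * (i + 1))
              for i, _ in enumerate(labels) for _ in range(20)])
y = np.repeat(labels, 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([frame_features(rng.standard_normal(16000))]))
```

In a real device the same structure applies, but the features would be computed on streaming audio under tight power constraints and the label set would cover many more environments.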

One focus will be on a class of sounds called audio textures: richly structured but temporally homogeneous signals. Examples include diners babbling at a restaurant, a train rattling along a track, wind howling through the trees, and water running from a tap. Audio textures are often indicative of the environment, and they therefore carry valuable information about the scene that could be harnessed by a hearing aid. Moreover, textures often corrupt target signals, and their suppression can help the hearing impaired. We will develop efficient texture recognition systems that can identify the noises present in an environment. We will then design and test bespoke real-time noise reduction strategies that use information about the audio textures present in the environment.
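As an illustration of one common way to fingerprint a texture (a sketch only, not the recognition system proposed here), the code below summarises a signal by time-averaged statistics of its sub-band envelopes; because textures are temporally homogeneous, such statistics are stable over time and can feed a recogniser. The filterbank layout and the choice of statistics are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def texture_statistics(audio, sr=16000, n_bands=8):
    """Summarise a texture by statistics of its sub-band envelopes:
    per-band mean, variance and skewness, plus band-to-band correlations."""
    edges = np.geomspace(100, sr / 2 * 0.9, n_bands + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, audio)
        envelopes.append(np.abs(hilbert(band)))   # amplitude envelope
    env = np.stack(envelopes)                     # (n_bands, n_samples)
    mean = env.mean(axis=1)
    var = env.var(axis=1)
    skew = ((env - mean[:, None]) ** 3).mean(axis=1) / (var ** 1.5 + 1e-12)
    corr = np.corrcoef(env)[np.triu_indices(n_bands, k=1)]
    return np.concatenate([mean, var, skew, corr])

# Example: fingerprint a 4-second synthetic "texture" (white noise here).
rng = np.random.default_rng(1)
print(texture_statistics(rng.standard_normal(4 * 16000)).shape)
```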

Intelligent hearing devices

Sensorineural hearing loss can have many underlying causes. Within the cochlea there may be dysfunction of the inner hair cells (IHCs) or outer hair cells (OHCs), metabolic disturbance, or structural abnormalities. Ideally, audiologists would fit a patient's hearing aid based on detailed knowledge of the underlying cause of the hearing loss, since this determines the optimal device settings, or indeed whether to proceed with the intervention at all. Unfortunately, the hearing test employed in current fitting procedures, the audiogram, cannot reliably distinguish between many different forms of hearing loss.

More sophisticated hearing tests are needed, but they have proven hard to design. In the second arm of the project we propose a different approach: refine a model of the patient's hearing loss after each stage of the test, and use this model to automatically design and select stimuli for the next stage that are particularly informative. These tests will be fast, accurate, and capable of determining the form of the patient's specific underlying dysfunction. The model of a patient's hearing loss will then be used to set up hearing devices in an optimal way, using a mixture of computer simulation and listening tests.
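A minimal sketch of this style of adaptive test is given below, reduced to estimating a single detection threshold: a posterior over candidate thresholds is updated after each yes/no response, and the next stimulus level is chosen greedily to minimise the expected posterior entropy. The logistic psychometric function, the grid prior and the greedy entropy criterion are illustrative assumptions rather than the project's actual procedure.

```python
import numpy as np

def psychometric(level_db, threshold_db, slope=1.0):
    """Probability the listener detects a tone at `level_db`
    given a candidate threshold (a logistic psychometric function)."""
    return 1.0 / (1.0 + np.exp(-slope * (level_db - threshold_db)))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Grid of candidate thresholds (the "model of the patient's hearing loss",
# reduced here to a single number) and a uniform prior over them.
thresholds = np.arange(0.0, 80.0, 1.0)
posterior = np.full(len(thresholds), 1.0 / len(thresholds))
candidate_levels = np.arange(0.0, 80.0, 2.0)

def most_informative_level(posterior):
    """Pick the stimulus level whose yes/no answer is expected to shrink
    the posterior entropy the most (greedy information gain)."""
    best_level, best_expected = None, np.inf
    for level in candidate_levels:
        p_detect = psychometric(level, thresholds)
        p_yes = np.sum(posterior * p_detect)
        post_yes = posterior * p_detect
        post_no = posterior * (1.0 - p_detect)
        post_yes /= post_yes.sum()
        post_no /= post_no.sum()
        expected = p_yes * entropy(post_yes) + (1 - p_yes) * entropy(post_no)
        if expected < best_expected:
            best_level, best_expected = level, expected
    return best_level

def update(posterior, level, heard):
    """Bayesian update of the threshold posterior after one trial."""
    like = psychometric(level, thresholds)
    posterior = posterior * (like if heard else 1.0 - like)
    return posterior / posterior.sum()

# Simulated test session with a listener whose true threshold is 45 dB.
rng = np.random.default_rng(2)
true_threshold = 45.0
for _ in range(20):
    level = most_informative_level(posterior)
    heard = rng.random() < psychometric(level, true_threshold)
    posterior = update(posterior, level, heard)
print("estimated threshold:", thresholds[np.argmax(posterior)])
```

The same greedy information-gain idea scales to richer models of hearing loss, where each stage selects the stimulus expected to be most diagnostic of the underlying dysfunction.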
Key Findings; Potential use in non-academic contexts; Impacts; Sectors submitted by the Researcher:
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.cam.ac.uk