
Details of Grant 

EPSRC Reference: EP/G062609/1
Title: A multichannel adaptive integrated MEMS/CMOS microphone
Principal Investigator: Smith, Professor LS
Other Investigators:
Researcher Co-Investigators:
Project Partners:
QinetiQ
Department: Computing Science and Mathematics
Organisation: University of Stirling
Scheme: Standard Research
Starts: 08 March 2010
Ends: 07 September 2014
Value (£): 353,817
EPSRC Research Topic Classifications:
Electronic Devices & Subsys.
VLSI Design
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
EP/G063710/1
Panel History:
Panel Date: 24 Apr 2009
Panel Name: ICT Prioritisation Panel (April 09)
Outcome: Announced
Summary on Grant Application Form
There are many different types of microphone. Their primary function is transduction: converting pressure waves (within some range of frequencies) into a single electrical signal, usually as precisely as possible. After this, the signal may be used for recording or for interpretation (which is our interest here). A major problem in interpretation is that the signal may have a large amount of energy in some parts of the auditory spectrum but much less in others, and that this distribution may alter rapidly. Often, it is the energy in these lower-energy areas that is critical for interpretation. Current practice is to filter the single electrical signal from the microphone (whether using FFTs or bandpass filters) and then examine the signal so produced. We propose a different approach in which the pressure wave is directly transduced into multiple electrical signals, corresponding to different parts of the audible spectrum. By making the transducers active (i.e. providing them with a rapidly adjusting gain control), we will be able to increase the sensitivity of the filters in those areas where additional sensitivity can be useful in the interpretation task, and reduce the sensitivity in those areas where the signal is very strong.
The auditory interpretation task undertaken by animals (solving the what and where tasks when there are, as is normally the case, multiple sound sources in a reverberant environment) is the same task that an autonomous robot's auditory system needs to undertake. Animal hearing systems include multiple transducers and provide numerous outputs for different parts of the spectrum, whilst adjusting their sensitivity and selectivity dynamically. Current microphones provide a single electrical output, which is then either processed into a number of bandpass streams (maintaining precise timing) or into a sequence of FFT-based vectors, such as cepstral coefficients (losing timing precision). The proposed active MEMS microphone performs the spectral breakdown at transduction, providing an inherently parallel output whilst maintaining precise timing. Further, it is adaptive. This adaptive capability, non-existent in current microphones, is important in hearing aids. Precise timing information is important for identifying source direction using inter-aural time and level differences. Where there are multiple active sources, accurate interpretation of the foreground source requires some degree of sound streaming, which in turn requires the ability to examine features of the sound, often in spectral areas with relatively low energy.
The active MEMS bandpassing microphone will consist of a membrane which will vibrate due to the external pressure wave. The membrane is physically linked to different resonant elements (bars) in the MEMS structure; these elements will have a range of resonant frequencies. Further, these bars will act as gates for MOS transistors, so that their vibration modulates the current passing through these transistors. The modulated current will be coded as a set of spike sequences, and these spikes will be processed to provide a signal that adjusts the sensitivity of each resonator, using an electrostatic effect to change the response of the transistors to the vibration of the bars. This modulation will be used to adjust the gain so that quiet areas of the spectrum are selectively amplified and loud areas of the spectrum selectively attenuated.
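A rough behavioural sketch of the kind of per-band gain adaptation described above (a simple digital model in Python with illustrative channel signals, target level and adaptation rate; not the actual MEMS/CMOS circuit):

import numpy as np

def adapt_gains(band_signals, gains, target_rms=0.1, rate=0.05, g_min=0.1, g_max=10.0):
    """One adaptation step: apply the current per-band gains, then nudge each
    gain towards a common target output level, so quiet bands are amplified
    and loud bands attenuated. Returns (gain-adjusted outputs, updated gains)."""
    band_signals = np.asarray(band_signals, dtype=float)
    out = gains[:, None] * band_signals
    rms = np.sqrt(np.mean(out ** 2, axis=1)) + 1e-12
    gains = np.clip(gains * (target_rms / rms) ** rate, g_min, g_max)
    return out, gains

# Example: a loud band and a quiet band converge towards the same output level.
rng = np.random.default_rng(0)
bands = np.vstack([0.5 * rng.standard_normal(1024),    # loud band
                   0.01 * rng.standard_normal(1024)])  # quiet band
gains = np.ones(2)
for _ in range(200):
    out, gains = adapt_gains(bands, gains)
print("adapted gains:", gains)   # small gain for the loud band, large gain for the quiet one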
In this way, it will be possible to build an integrated MEMS/CMOS microphone which can attenuate loud areas of the spectrum concurrently with amplifying quiet areas of the spectrum. The spike-coded output will be made available in a way compatible with the address-event representation (AER), making it compatible with existing and proposed neuromorphic chips from other laboratories.
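As a rough illustration of the address-event representation mentioned above, the sketch below turns each channel's output into spikes and emits every spike as a (time, channel address) event on a single time-ordered stream; the simple integrate-and-fire style encoder and its threshold are illustrative assumptions, not the chip's actual spike-generation circuit:

import numpy as np

def encode_aer(band_outputs, fs, threshold=1.0):
    """Return a time-sorted list of (time_in_seconds, channel_address) events."""
    events = []
    for channel, x in enumerate(band_outputs):
        accumulator = 0.0
        for n, sample in enumerate(np.abs(np.asarray(x, dtype=float))):
            accumulator += sample          # integrate the rectified signal
            if accumulator >= threshold:   # fire an address event and reset
                events.append((n / fs, channel))
                accumulator -= threshold
    events.sort()                          # merge channels into one AER stream
    return events

# Example: two channels; the stronger one generates more address events.
fs = 16000
t = np.arange(1024) / fs
bands = [0.8 * np.sin(2 * np.pi * 440 * t),     # strong low-frequency channel
         0.05 * np.sin(2 * np.pi * 2200 * t)]   # weak high-frequency channel
events = encode_aer(bands, fs, threshold=5.0)
print(len(events), "events; first few:", events[:5])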
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.stir.ac.uk