
Details of Grant 

EPSRC Reference: EP/X018342/1
Title: Decoding Speech using Invasive Brain-Computer Interfaces based on Intracranial Brain Signals (dSPEECH)
Principal Investigator: Zhang, Dr D
Other Investigators:
Barua, Dr N
Researcher Co-Investigators:
Project Partners:
Brain Products UK Ltd
g.tec medical engineering GmbH
Huashan Hospital
SpeakUnique Limited
Department: Electronic and Electrical Engineering
Organisation: University of Bath
Scheme: Standard Research - NR1
Starts: 01 January 2023
Ends: 31 December 2024
Value (£): 201,957
EPSRC Research Topic Classifications:
Human Communication in ICT
Human-Computer Interactions
EPSRC Industrial Sector Classifications:
Healthcare
Information Technologies
Related Grants:
Panel History:
Panel Date   Panel Name                                              Outcome
21 Jun 2022  New Horizons 2021 Full Proposal Panel                   Announced
22 Jun 2022  New Horizons People and Interactivity Panel June 2022   Announced
Summary on Grant Application Form
Some patients cannot speak because of impairment or degeneration of the neural pathways recruited in speech production or movement, as in motor neurone disease (MND) or amyotrophic lateral sclerosis (ALS). However, brain-computer interfaces (BCIs) may benefit these patients if the brain structures responsible for language and cognition are intact, because BCIs have the potential to bypass damaged neural pathways by decoding brain signals directly into speech.

BCIs may record brain signals invasively or non-invasively. In general, the brain signals recorded by non-invasive BCIs such as electroencephalography (EEG) are of poor quality with a low signal-to-noise ratio, so non-invasive BCIs cannot currently decode speech with acceptable performance. In contrast, invasive BCIs such as electrocorticography (ECoG) and stereo-electroencephalography (SEEG) can collect high-quality intracranial brain signals with adequate spatial and temporal resolution, making them a promising route to speech decoding.

Although overt speech decoding using invasive BCIs (ECoG and SEEG) has developed rapidly and produced many excellent results in recent years, covert (imagined) speech decoding remains challenging, for several reasons. The first and major reason is that the associated neural signals are weak and variable compared with overt speech, which makes covert speech very difficult to decode with machine learning algorithms. The second reason is the scarcity of invasively recorded imagined speech data. This scarcity cannot be relieved by animal models, because animal communication is believed to be limited to a finite set of utterances that is largely genetically determined. In addition, recordings from humans are generally restricted to non-invasive techniques: intracranial data can only be obtained in a clinical environment from patients with drug-resistant epilepsy or other neurological conditions. Inclusion criteria for speech studies, such as normal cognition and the ability to articulate, further reduce the number of potential subjects. This project (dSPEECH) aims to make breakthroughs on these fronts through a novel study.

In dSPEECH, we aim to decode covert speech using invasive BCIs (ECoG and SEEG). We will establish new paradigms for a new generation of brain-to-text BCIs, develop advanced machine learning/deep learning algorithms for decoding covert speech, and construct the world's first ECoG/SEEG dataset for covert speech. We will also address the associated ethical and data-management issues. With such an ECoG/SEEG dataset and suitable decoding methods available, we are confident of making substantial progress in decoding covert speech.
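
The summary does not specify a decoding pipeline, so the following is a minimal, hedged sketch of one common brain-to-text baseline: extracting high-gamma (70-150 Hz) log-power features from intracranial channels and training a linear classifier over a small imagined-word vocabulary. The sampling rate, channel count, frequency band, and synthetic trials are illustrative assumptions, not the project's method or data.

```python
# Hedged sketch of a baseline covert-speech decoder (NOT the dSPEECH method).
# Assumes: multichannel intracranial recordings segmented into word trials;
# high-gamma (70-150 Hz) log-power per channel as features; linear classifier.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 1000   # sampling rate in Hz (assumed)
N_CH = 32   # number of intracranial channels (assumed)

def high_gamma_power(trials, fs=FS, band=(70.0, 150.0)):
    """Log high-gamma power per channel for each trial.

    trials: array of shape (n_trials, n_channels, n_samples)
    returns: features of shape (n_trials, n_channels)
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trials, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1) + 1e-12)

# Synthetic stand-in for real ECoG/SEEG word trials (5 imagined words).
rng = np.random.default_rng(0)
n_trials, n_words = 100, 5
labels = rng.integers(0, n_words, size=n_trials)
trials = rng.standard_normal((n_trials, N_CH, FS))  # 1 s per trial

X = high_gamma_power(trials)
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} (chance ~{1 / n_words:.2f})")
```

On random synthetic data the score sits near chance; the sketch is only meant to show the feature-extraction and classification stages that a real covert-speech decoder would refine.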

dSPEECH is a joint project that brings together a multidisciplinary team, including neural engineering researchers and neurosurgeons from clinical medicine. We also have strong support from partners, including leading BCI companies and experienced neurosurgeons overseas. Building on this close and solid collaboration, we believe dSPEECH can generate world-leading results.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.bath.ac.uk