
Details of Grant 

EPSRC Reference: EP/W00805X/1
Title: Context Aware Augmented Reality for Endonasal Endoscopic Surgery
Principal Investigator: Clarkson, Dr MJ
Other Investigators:
Blandford, Professor A; Marcus, Dr H; Stoyanov, Professor D
Researcher Co-Investigators:
Project Partners:
Department: Medical Physics and Biomedical Engineering
Organisation: UCL
Scheme: Standard Research
Starts: 01 May 2022
Ends: 30 April 2025
Value (£): 1,109,056
EPSRC Research Topic Classifications:
Human-Computer Interactions
Image & Vision Computing
Med. Instrument. Device & Equip.
Medical Imaging
EPSRC Industrial Sector Classifications:
Healthcare
Related Grants:
Panel History:
Panel Date: 12 Aug 2021
Panel Name: Healthcare Technologies Investigator Led Panel Aug 2021
Outcome: Announced
Summary on Grant Application Form
This project aims to develop tools that guide the surgeon during surgery to remove cancers of the pituitary gland.

Access to the pituitary gland is difficult; one current technique is the endonasal approach, through the nose. While this approach is minimally invasive, which is better for the patient, it is technically challenging for the surgeon: it is difficult both to manoeuvre the instruments and to maintain contextual awareness, remembering the location of, and identifying, critical structures.

One proposed solution is to combine pre-operative scan data, such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) scans, with the endoscopic video. Typically, engineers have proposed "Augmented Reality", in which information from the MRI/CT scans is simply overlaid on top of the endoscopic video. However, this approach has not found favour with clinical teams, as the result is often confusing and difficult to use.
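For illustration only, the classical overlay described above amounts to alpha-blending a rendering of the pre-operative model onto each video frame. The sketch below is a minimal, hypothetical version of that baseline (the function name, array shapes, and blending weight are assumptions, not part of the project); note that a fixed blend like this obscures the operative field regardless of surgical context, which is one reason such overlays can confuse rather than help.

```python
import numpy as np

def naive_ar_overlay(frame: np.ndarray, rendering: np.ndarray,
                     alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend a rendered pre-operative model onto an endoscopic frame.

    frame, rendering: (H, W, 3) uint8 images of the same size.
    alpha: opacity of the overlay; higher values obscure more of the video.
    """
    blended = (1.0 - alpha) * frame.astype(np.float32) \
              + alpha * rendering.astype(np.float32)
    return blended.astype(np.uint8)

# Example with synthetic data: a grey "video frame" and a red "model render".
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
rendering = np.zeros((480, 640, 3), dtype=np.uint8)
rendering[..., 0] = 255
composite = naive_ar_overlay(frame, rendering)
```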

In this project we have assembled a team of surgeons and engineers to re-think the Augmented Reality paradigm from the ground up. First, the aim is to identify the most relevant information to display on-screen at each stage of the operation. Then, machine learning will be used to analyse the endoscopic video and automatically identify which stage of the procedure the surgeon is working on. The guidance system will then automatically switch modes, providing the most useful information for that stage of the procedure. Finally, we will automate the alignment of pre-operative data to the endoscopic video, again using machine learning techniques.
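As a rough sketch of what the stage-recognition component might look like, the snippet below classifies individual endoscopic video frames into operative phases, written here in PyTorch. The phase names, network architecture, and framework choice are all illustrative assumptions rather than the project's actual design; a real system would likely add a temporal model over frame features.

```python
import torch
import torch.nn as nn

# Hypothetical phase taxonomy for the endonasal approach; the real set
# of stages would be defined with the clinical team.
PHASES = ["nasal", "sphenoid", "sellar", "closure"]

class PhaseRecogniser(nn.Module):
    """Frame-level surgical phase classifier: a small CNN encoder
    followed by a linear head over globally pooled features."""

    def __init__(self, num_phases: int = len(PHASES)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
        )
        self.head = nn.Linear(32, num_phases)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, 3, H, W) batch of endoscopic video frames
        feats = self.encoder(frames).flatten(1)  # (B, 32)
        return self.head(feats)                  # phase logits, (B, num_phases)

if __name__ == "__main__":
    model = PhaseRecogniser()
    dummy = torch.randn(2, 3, 224, 224)          # two random "frames"
    logits = model(dummy)
    print("Predicted phase:", PHASES[logits.argmax(dim=1)[0].item()])
```

In a guidance system of the kind described, the predicted phase would drive the mode switch, selecting which subset of the pre-operative data to present at that point in the operation.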

The end result should be more accurate and more clinically relevant than current state-of-the-art methods, representing a genuine step change in performance for image guidance during skull-base procedures.

Key Findings, Potential Use in Non-Academic Contexts, and Impacts:
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: