
Details of Grant 

EPSRC Reference: EP/X027732/1
Title: A Human-Trustable Self-Improving Machine Learning Framework for Rapid Disaster Responses Using Satellite Sensor Imagery
Principal Investigator: GU, Dr X
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Aberystwyth University
Lancaster University
Department: Computing Science
Organisation: University of Surrey
Scheme: New Investigator Award
Starts: 01 March 2024 Ends: 28 February 2026 Value (£): 267,600
EPSRC Research Topic Classifications:
Software Engineering
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date: 25 Sep 2023
Panel Name: EPSRC ICT Prioritisation Panel Sept 2023
Outcome: Announced
Summary on Grant Application Form
Due to abrupt changes in Earth's climate and the dramatic global rise in urbanisation, natural disasters have become increasingly unpredictable and have caused great social and economic devastation in recent years. According to one published study, between 2015 and 2019 there were a total of 1,624 reported natural disasters, such as earthquakes, floods and landslides, killing on average 60,000 people each year globally. Although humans cannot prevent natural disasters in most cases, timely responses can play a critical role in disaster relief and life-saving. Rapid and accurate building damage assessment (BDA) is required in humanitarian assistance and disaster response to carry out life-saving efforts. However, current BDA is mostly based on manual inspection and documentation, which is time-consuming and labour-intensive.

Although high-resolution satellite sensor images (HRSSIs), such as those from GeoEye-1 and WorldView-2 and -3, have become the major source of first-hand information for BDA, these images often present a mosaic of complex geometrical structures and spatial patterns. Automatic information extraction from HRSSIs of disaster-affected areas is imperative in time-critical situations and has the potential to facilitate post-disaster assessment and speed up life-saving rescue processes. However, this remains an extremely challenging task for state-of-the-art machine learning (ML) algorithms. In practice, human experts have to manually interpret and examine the captured HRSSIs, which takes significant time and labour.

Conventional ML-based BDA methods leverage mainstream classifiers, such as support vector machines and random forests, to generate a damage map from hand-crafted features extracted from pre- and post-disaster images. However, the complexity and heterogeneity of HRSSIs hinder the applicability of conventional methods, making feature extraction extremely difficult. Moreover, individual buildings often occupy only a few pixels, leaving minimal structural information to exploit. Although conventional methods do not require a large volume of training images and are more interpretable, they fail easily on real-world scenes. On the other hand, deep learning techniques, particularly deep convolutional neural networks (DCNNs), have achieved significant success in computer vision and pattern recognition. Some recent studies have explored the capability of DCNNs for BDA and reported promising outcomes under experimental conditions. DCNN-based methods have become increasingly popular and currently represent the state of the art in BDA research. However, DCNNs are often characterised as black boxes, and they are computationally intensive and data-hungry. Because their underlying mechanisms differ from human reasoning and are not readily understandable, DCNNs can fail easily in unfamiliar scenarios due to uncertainties and are often observed to exhibit unexpected behaviours. These disadvantages hinder the practical utility of DCNN-based BDA methods in real-world scenarios. As a result, emergency management services (EMSs), e.g. the International Charter Space and Major Disasters, still rely on visual interpretation of HRSSIs to assess building damage for reliability reasons.
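
As a concrete illustration of the conventional pipeline described above (not the method proposed in this project), the sketch below trains a random forest on simple hand-crafted features, per-band statistics of pre-disaster, post-disaster and difference patches, to label each image patch as damaged or undamaged. The feature set, the 32x32 RGB patch size and the patches_pre, patches_post and labels arrays are illustrative assumptions only; real BDA work would use georeferenced satellite bands and expert-annotated labels.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def handcrafted_features(pre_patch, post_patch):
    # Simple per-patch features: per-band mean and standard deviation of the
    # pre-disaster patch, the post-disaster patch, and their difference.
    diff = post_patch.astype(np.float32) - pre_patch.astype(np.float32)
    feats = []
    for img in (pre_patch, post_patch, diff):
        feats.extend(img.mean(axis=(0, 1)))   # per-band mean
        feats.extend(img.std(axis=(0, 1)))    # per-band standard deviation
    return np.array(feats)

# Illustrative stand-in data: 200 paired 32x32 RGB patches with binary labels
# (0 = undamaged, 1 = damaged). Real inputs would come from co-registered
# pre- and post-disaster HRSSIs.
rng = np.random.default_rng(0)
patches_pre = rng.random((200, 32, 32, 3))
patches_post = rng.random((200, 32, 32, 3))
labels = rng.integers(0, 2, size=200)

X = np.stack([handcrafted_features(a, b) for a, b in zip(patches_pre, patches_post)])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("patch-level accuracy:", clf.score(X_test, y_test))

Aggregating such patch-level predictions over a scene yields the damage map; the brittleness noted above arises because these fixed statistics cannot capture the complex structures present in real HRSSIs.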

To make ML-based BDA methods reliable for real-world scenarios, this project aims to catalyse a step-change in artificial intelligence by developing highly innovative explainable ML (XML) techniques to automate the BDA process based on post-disaster HRSSIs. The developed XML techniques will act as a framework for scene understanding, building segmentation and damage assessment at both the scene level and the pixel level in a joint fashion, and will have the capacity to self-adapt to different application scenarios in real time to address real-world uncertainties. By achieving a reliable automated solution to the highly challenging post-disaster BDA task, we ultimately aim to assist EMSs with faster post-disaster assessment, facilitating life-saving processes.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:
Sectors submitted by the Researcher: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.surrey.ac.uk