
Details of Grant 

EPSRC Reference: EP/X015971/1
Title: XAdv: Robust Explanations for Malware Detection
Principal Investigator: Pierazzi, Dr F
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Avast Software s.r.o.
Kings College London
NCC Group
Technical Univ of Braunschweig (replace)
Uni of Illinois at Urbana Champaign
University of Cagliari
Department: Informatics
Organisation: Kings College London
Scheme: New Investigator Award
Starts: 01 October 2023
Ends: 30 September 2026
Value (£): 315,128
EPSRC Research Topic Classifications:
Software Engineering
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date: 24 Jan 2023
Panel Name: EPSRC ICT Prioritisation Panel January 2023
Outcome: Announced
Summary on Grant Application Form
Malware (short for "malicious software") refers to any software that performs malicious activities, such as stealing information (e.g., spyware) or damaging systems (e.g., ransomware). Malware authors constantly update their attack strategies to evade detection by antivirus systems, and automatically generate multiple variants of the same malware that are harder to recognize than the original. Traditional malware detection methods relying on manually defined patterns (e.g., sequences of bytes) are time-consuming and error-prone. Hence, academic and industry researchers have started exploring how Machine Learning (ML) can help in detecting new, unseen malware types. In this context, explaining ML decisions is fundamental for security analysts to verify the correctness of a given decision and to develop patches and remediations faster. However, it has been shown that attackers can induce arbitrary, wrong explanations in ML systems by carefully modifying a few bytes of their malware.
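The fragility of explanations described above can be illustrated with a toy sketch. The following Python example is illustrative only and is not the project's method: it assumes a hypothetical linear detector over byte-frequency features and a gradient-times-input attribution, and shows how a small change to the input can reorder which feature the explanation highlights even when the classifier's score barely moves.

```python
# Illustrative sketch only (hypothetical model and features, not XAdv's approach):
# a toy linear "malware detector" over byte-frequency bins, with a
# gradient-times-input attribution as the explanation.
import numpy as np

rng = np.random.default_rng(0)

n_features = 8                       # e.g. coarse byte-histogram bins (hypothetical)
w = rng.normal(size=n_features)      # weights of a pre-trained linear detector
b = -0.5

def predict_proba(x):
    """Probability that the sample is malicious (logistic regression)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def explain(x):
    """Gradient-times-input attribution: per-feature contribution to the score."""
    return w * x

x = np.abs(rng.normal(size=n_features))   # original sample's feature vector
x_adv = x.copy()
x_adv[np.argmax(explain(x))] *= 0.2       # attacker tweaks a few bytes, shrinking one bin

print(f"p(malicious) before/after: {predict_proba(x):.3f} / {predict_proba(x_adv):.3f}")
print("top-attributed feature before:", int(np.argmax(explain(x))))
print("top-attributed feature after: ", int(np.argmax(explain(x_adv))))
```

Running the sketch typically shows the top-ranked feature in the explanation changing while the predicted probability shifts only slightly, which is the kind of manipulation robust explanations aim to resist.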

This project, XAdv ("X" for explanation, and "Adv" for adversarial robustness), aims to design "robust explanations" for malware detection, i.e., explanations of model decisions that are easy for security analysts to understand and visualize (to support faster verification of maliciousness and development of patches), and that remain trustworthy and reliable even in the presence of malware evolution over time and evasive malware authors.

Moreover, this project will explore how robust explanations can be used to automatically adapt ML-based malware detection models to new threats over time, as well as to integrate domain knowledge from security analysts' feedback on robust explanations to improve detection accuracy.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: