
Details of Grant 

EPSRC Reference: EP/T026995/1
Title: EnnCore: End-to-End Conceptual Guarding of Neural Architectures
Principal Investigator: Cordeiro, Dr LC
Other Investigators:
Lujan, Professor M
Freitas, Dr A
Mustafa, Dr MA
Brown, Professor G
Huang, Professor X
Researcher Co-Investigators:
Project Partners:
Department: Computer Science
Organisation: University of Manchester, The
Scheme: Standard Research
Starts: 01 February 2021
Ends: 31 December 2024
Value (£): 1,721,560
EPSRC Research Topic Classifications:
Artificial Intelligence
Fundamentals of Computing
Software Engineering
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
19 Feb 2020 | Security for all in an AI enabled society Panel February 2020 | Announced
Summary on Grant Application Form
EnnCore will address a fundamental security problem in neural-based (NB) architectures, allowing system designers to specify and verify a conceptual/behavioral hard core for the system, which can be used to safeguard NB systems against unanticipated behavior and attacks. It will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. We will, therefore, develop methods, algorithms and tools to achieve fully verifiable intelligent systems, which are explainable, whose correct behavior is guaranteed, and which are robust against attacks.

EnnCore will be validated on three diverse and high-impact application scenarios: (i) securing an AI system for cancer diagnosis (health -- Cancer Research UK, The Christie); (ii) ensuring ethical and legal behavior of a conversational agent (health -- Cancer Research UK, The Christie); and (iii) securing an AI system for demand response (energy -- Urbanchain). The use cases will be co-designed and validated under real-world data conditions with the help of one clinical and one industrial partner.

As a result, EnnCore will address a fundamental research problem: ensuring the security of neural-enabled components by taking into account their entire lifecycle, from development to deployment. Solving this research problem will have a far-reaching impact on areas such as health care, ethically grounded AI and demand response, which heavily depend on secure and trusted software components to meet safety-critical requirements. Our overall research objective is therefore to have a long-term impact on how secure and trusted AI-based software components are written, thus contributing to a shared vision of fully verifiable software, in which the underlying NB architectures are built with strong symbolic and mathematical guarantees.

To achieve this objective, EnnCore will design and validate a full-stack symbolic safeguarding system for NB architectures. This project will advance the state-of-the-art in the development of secure Deep Neural Network (DNN) models by mapping, using and extending explainability properties of existing neuro-symbolic DNN architectures (e.g., Graph Networks, Differentiable Inductive Logic Programming), thus safeguarding them with symbolic verification, abstract interpretation and program synthesis methods. EnnCore will pioneer the multi-disciplinary dialogue between explainable DNNs and formal verification.

In particular, EnnCore will deliver safeguarding for safety-critical NB architectures with the following novel properties:

(1) Full-stack symbolic software verification: we will develop the first bit-precise and scalable symbolic verification framework that reasons over actual implementations of DNNs, thereby providing guarantees about security properties of the underlying hardware and software that are routinely ignored in the existing literature (a minimal illustrative sketch of this style of reasoning is given after this list).

(2) Explainability / Interpretability: EnnCore will pioneer the integration of knowledge-based and neural explainability methods to support end users in specifying security constraints and diagnosing security risks, in order to provide assurances about system security as NB models evolve. Particular attention will be given to the quantitative and qualitative characterization of semantic-drift phenomena in security scenarios.

(3) Scalability: we will systematically combine contemporary symbolic methods for explaining, interpreting and verifying neural representations. In particular, we will develop a neuro-symbolic safeguard framework that links structural knowledge-based representation elements to attentional architecture elements, achieving scalability and precision in an unprecedented manner. We will also develop new learning techniques for reusing information across different verification runs to reduce formula size and consistently improve constraint solving (a second sketch of this incremental reuse idea follows below).
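
To make property (1) concrete, the following is a minimal, hypothetical sketch of what bit-precise reasoning over a deployed implementation can look like, written against the off-the-shelf z3 SMT solver (the z3-solver Python package). The Q8.8 fixed-point format, the weight and bias values, and the output bound are invented for illustration; this is not EnnCore's framework, which targets full DNN implementations rather than a single neuron.

    # Hypothetical sketch: bit-precise check of one fixed-point ReLU neuron with z3.
    # Everything here (format, weights, bound) is illustrative, not project code.
    from z3 import BitVec, BitVecVal, If, Solver, sat

    W, FRAC = 16, 8                              # 16-bit words, Q8.8 fixed point

    x = BitVec('x', W)                           # symbolic input: any bit pattern
    w = BitVecVal(int(0.5 * (1 << FRAC)), W)     # weight 0.5 in Q8.8
    b = BitVecVal(int(1.0 * (1 << FRAC)), W)     # bias 1.0 in Q8.8

    # The arithmetic exactly as a deployed 16-bit implementation would perform it,
    # including any wrap-around on overflow (this is what "bit-precise" buys us).
    pre = ((x * w) >> FRAC) + b
    relu = If(pre > 0, pre, BitVecVal(0, W))

    s = Solver()
    s.add(x >= -(1 << FRAC), x <= (1 << FRAC))   # restrict inputs to [-1.0, 1.0]
    s.add(relu > BitVecVal(2 * (1 << FRAC), W))  # negate the claim "output <= 2.0"

    if s.check() == sat:
        print('counterexample input:', s.model()[x])
    else:
        print('property holds for every representable input in range')

An "unsat" verdict means the bound holds for every bit pattern the implementation can actually receive in the stated range; a "sat" verdict yields a concrete counterexample input, i.e., a potential attack vector.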
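Similarly, for property (3), the following hypothetical sketch illustrates the general idea of reusing solver state across several related verification queries, using z3's assumption-based incremental interface. The toy network encoding and the two properties are invented; EnnCore's actual learning techniques for sharing information between runs are a research contribution of the project and are not shown here.

    # Hypothetical sketch: one solver context shared across several queries (z3).
    from z3 import Bool, If, Implies, Real, Solver, sat

    s = Solver()
    x, y = Real('x'), Real('y')

    # Shared encoding of a toy ReLU unit over a bounded input box, asserted once.
    pre = 0.5 * x + 0.25 * y
    h = If(pre > 0, pre, 0)
    s.add(x >= -1, x <= 1, y >= -1, y <= 1)

    # Each (negated) property is guarded by a fresh Boolean and checked under
    # assumptions, so the solver keeps its internal state between the two runs
    # instead of rebuilding and re-solving the shared constraints from scratch.
    p1, p2 = Bool('p1'), Bool('p2')
    s.add(Implies(p1, h > 2))        # negation of "the output never exceeds 2"
    s.add(Implies(p2, h < 0))        # negation of "the output is never negative"

    for claim, guard in [('output <= 2', p1), ('output >= 0', p2)]:
        verdict = 'holds' if s.check(guard) != sat else 'violated'
        print(claim, verdict)
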
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.man.ac.uk