EPSRC Reference: EP/V060422/1
Title: Multi-disciplinary Use Cases for Convergent new Approaches to AI explainability
Principal Investigator: D'Onofrio, Professor M
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Physics
Organisation: University of Liverpool
Scheme: Standard Research - NR1
Starts: 01 February 2021
Ends: 30 April 2025
Value (£): 300,019
EPSRC Research Topic Classifications:
EPSRC Industrial Sector Classifications: No relevance to Underpinning Sectors
Related Grants:
Panel History:
Summary on Grant Application Form |
Developing and testing methodologies that allow the predictions of AI algorithms to be interpreted in terms of transparency, interpretability, and explainability has become one of the most important open questions in AI today. In this proposal we bring together researchers from different fields with the complementary skills essential to understanding the behaviour of AI algorithms. That behaviour will be studied through a deliberately multidisciplinary set of use cases in which explainable AI can play a crucial role, and which will be used to quantify the strengths and to highlight, and possibly resolve, the weaknesses of the available explainable AI methods in different application contexts.

One aspect that has so far hindered substantial progress towards explainability is that several proposed solutions in explainable AI proved effective only after being tailored to specific applications, and are frequently not easily transferred to other domains. In this project, we will test the same array of explainability techniques on use cases intentionally chosen to be heterogeneous with respect to data types, learning tasks, and scientific questions. The proposed use cases range from AI applications in High Energy Physics, to applied AI in medical imaging, to AI applied to the diagnosis of pulmonary, tracheal and nasal airway diseases, to machine-learning explainability techniques used to improve analysis and modelling in neuroscience.

For each use case, the research project will consist of three phases. In the first phase, we will apply state-of-the-art explainability techniques, chosen according to the requirements of the case under consideration. In the second phase, shortcomings of the techniques will be identified; most notably, issues of scalability to high-dimensional and raw data, where noise can be prevalent compared to the signal of interest, will be taken into consideration, as will the level of certifiability afforded by each algorithm. In the final phase, the algorithms and knowledge built up in each use case will be combined in order to document the results and to develop general procedures and engineering pipelines useful for the exploitation of xAI methods both in general and across different domains.
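Illustrative sketch (not part of the grant text): the proposal does not name specific explainability methods, so the example below uses plain gradient saliency, one of the simplest post-hoc attribution techniques that the first phase could apply. The toy PyTorch model and all identifiers here are assumptions for illustration only.

import torch
import torch.nn as nn

# Hypothetical stand-in for a trained classifier from one of the use cases.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
model.eval()

def gradient_saliency(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Attribute the target-class logit to each input feature via |d logit / d x|."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[0, target]      # scalar score for the class of interest
    logit.backward()                 # gradients flow back to the input features
    return x.grad.abs().squeeze(0)   # gradient magnitude as per-feature importance

x = torch.randn(1, 16)               # one synthetic input sample
print(gradient_saliency(model, x, target=1))  # larger values = more influential features

Attribution maps of this kind are exactly the sort of output whose scalability to high-dimensional, noisy raw data the second phase would scrutinise.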
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:

Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Project URL:
Further Information:
Organisation Website: http://www.liv.ac.uk