
Details of Grant 

EPSRC Reference: EP/X020207/1
Title: AI-informed decision making based on decision field theory
Principal Investigator: Tamborrino, Dr M
Other Investigators:
Singh, Dr S; Grigoryeva, Dr L
Researcher Co-Investigators:
Project Partners:
Department: Statistics
Organisation: University of Warwick
Scheme: Standard Research - NR1
Starts: 01 October 2022 Ends: 31 July 2024 Value (£): 77,705
EPSRC Research Topic Classifications:
Artificial Intelligence; Non-linear Systems Mathematics; Statistics & Appl. Probability
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
06 Jul 2022 | EPSRC Mathematical Sciences Small Grants Panel July 2022 | Announced
Summary on Grant Application Form
Machine learning (ML), Deep Learning (DL) and Artificial Intelligence (AI) have notably contributed to the development of recommender systems (playlist generators for video and music content, content recommenders for social media and web-service platforms, etc.), several types of recognition (e.g. face, image, speech), and self-driving cars, among many others. Using deep neural networks (DNNs), researchers have achieved higher accuracy than human participants in image recognition, and have predicted the biomolecular target of a drug and which environmental chemicals are of serious concern to human health, winning the Merck Molecular Activity Challenge and the 2014 Tox21 data challenge, respectively.

Despite their success across several fields, there have been a few recent cases where these approaches have drastically failed: for example, Uber's self-driving car that killed a pedestrian, or IBM's Watson for Oncology, which gave potentially fatal cancer-treatment recommendations. Understanding what went wrong is not an easy task, as explainability remains a core challenge in AI.

The lack of explainability becomes especially crucial whenever AI is used, e.g. by governments and the public and private sectors, to make decisions affecting human and behavioural sciences in general, since wrong or misleading decisions, or the inability to understand their mechanisms, may lead to dramatic consequences in many areas (medical treatment, retail and product supply, etc.). To make the results produced by powerful AI tools more interpretable, reliable and accountable, these tools should explain how and why a particular decision was made, e.g. which attributes were important in the decision making and with what confidence.

There have been several efforts to improve the explainability of AI, most of them focusing on enhancing the explainability and transparency of DNNs; see, e.g., the policy briefing "Explainable AI: the basics" from the Royal Society (https://royalsociety.org/ai-interpretability). This project contributes to this effort from a different perspective. Our goal is to perform AI-informed decision making driven by Decision Field Theory (DFT), proposing a new set of what we call AI-informed, DFT-driven decision-making models. Such models integrate human behaviour with AI by combining stochastic processes from DFT with ML tools, and they have the unique feature of interpretable parameters.

On the one hand, we will generalise the class of DFT models to reproduce characteristics and behaviour of interest, and apply ML and inferential approaches (mainly likelihood-free) to estimate the underlying interpretable DFT model parameters. On the other hand, we will use black-box DNN models as proxy (i.e. approximating) models of the interpretable DFT models (with the roles reversed with respect to Table 1 of the above-mentioned policy briefing) and use them to learn the processes of interest and make informed predictions (i.e. decisions) driven by DFT. Hence, by using AI to learn these processes, estimate their parameters and make predictions, we will shed light on why and how a particular decision was made, a crucial feature of interpretable AI-informed decision-making models.
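To give a flavour of the stochastic processes the summary refers to, the sketch below simulates a basic DFT-style preference-accumulation process: preferences evolve as P(t+1) = S P(t) + C M W(t) + noise, where S is a feedback matrix, C a contrast matrix, M the options' attribute values and W(t) stochastic attention weights, and a decision is made once one preference crosses a threshold. All numerical values (options, attributes, threshold, noise level) are hypothetical illustrations, not the project's actual models or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three options evaluated on two attributes.
M = np.array([[3.0, 1.0],    # option A: values on attributes 1 and 2
              [1.0, 3.0],    # option B
              [2.0, 2.0]])   # option C
n = M.shape[0]

S = 0.95 * np.eye(n)                 # feedback matrix: memory of past preference
C = np.eye(n) - np.ones((n, n)) / n  # contrast matrix: compare each option to the mean
theta = 5.0                          # decision threshold (illustrative)

def simulate_choice(max_steps=10_000, noise=0.3):
    """Accumulate preferences P(t+1) = S P(t) + C M W(t) + noise until one crosses theta."""
    P = np.zeros(n)
    for t in range(1, max_steps + 1):
        # Stochastic attention: attend to one attribute at a time, chosen at random.
        w = np.zeros(M.shape[1])
        w[rng.integers(M.shape[1])] = 1.0
        P = S @ P + C @ (M @ w) + noise * rng.standard_normal(n)
        if P.max() >= theta:
            return int(P.argmax()), t  # chosen option index and decision time
    return int(P.argmax()), max_steps

choice, rt = simulate_choice()
print(f"chose option {choice} after {rt} steps")
```

In this formulation the parameters (feedback strength, contrast structure, attention probabilities, threshold) each carry a behavioural interpretation, which is what makes inference on them, e.g. by likelihood-free methods applied to simulated choices and response times like those above, attractive for explainable decision making.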
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Description
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.warwick.ac.uk