
Details of Grant 

EPSRC Reference: EP/W01081X/1
Title: Computational Agent Responsibility
Principal Investigator: Fisher, Professor M
Other Investigators:
Whittle, Dr A; Dennis, Dr L; Beebee, Professor H
Researcher Co-Investigators:
Project Partners:
Australian National University (ANU); Florida State University; Ludwig Maximilians University of Munich; University of Amsterdam; University of Bergen
Department: Computer Science
Organisation: University of Manchester, The
Scheme: Standard Research
Starts: 01 January 2022
Ends: 30 June 2024
Value (£): 642,185
EPSRC Research Topic Classifications:
Artificial Intelligence; Fundamentals of Computing; Human-Computer Interactions; Robotics & Autonomy
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
29 Sep 2021 | Trustworthy Autonomous Systems Programme - Responsibility Interview Panel | Announced
Summary on Grant Application Form


Engineered systems are becoming more complex and, increasingly, more autonomous. Once they are allowed, or even required, to make their own decisions, a wide range of issues concerning safety, ethics and trustworthiness comes to the fore. Users are unlikely to trust these systems, and regulators are unlikely even to permit their deployment, without strong evidence concerning the decisions autonomous systems can make and the actions they can subsequently take.

Our work on the development and analysis of hybrid agent architectures for autonomous systems has shown how strong (formal) verification can be carried out on the core decision-making in systems constructed in a suitable manner. Our approach has also been used to capture and verify certain "ethical" decisions an autonomous system might make; these are especially important when the system is confronted with critical decisions in unforeseen situations and must judge between good and bad actions.

However, it has become clear that simple versions of ethical principles, such as good/bad or right/wrong, are insufficient and that we need stronger concepts of "responsibility" in practice. Such issues are well-studied in Philosophy, though mainly through the lens of human morality or legal accountability. In this project we aim to identify and develop suitable notions of "responsibility" that are consistent with views from Philosophy but that can also be used within computational agents at the heart of our autonomous systems. In doing this, we pave the way for formal verification of responsibility, sophisticated explanations and, crucially, the use of responsibilities as a driver for agent decisions and actions. Thus, our central aim is to devise a framework for autonomous systems responsibility that is philosophically justifiable, effectively implementable, and practically verifiable.
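To make the idea of "responsibilities as a driver for agent decisions" concrete, here is a minimal sketch of an agent decision loop in which action selection is constrained by explicit responsibility obligations. All names, rules and the data model are hypothetical illustrations of the general idea; the project's actual formal framework is not specified in this summary.

```python
# Hypothetical sketch: action selection constrained by explicit
# "responsibilities". Every identifier here is invented for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    benefit: int            # task utility of taking this action
    discharges: frozenset   # responsibilities this action fulfils
    violates: frozenset     # responsibilities this action would breach


def choose(actions, responsibilities):
    """Pick the best permissible action: exclude any action that breaches
    an active responsibility, then prefer actions that discharge more
    responsibilities, breaking ties by task benefit."""
    permissible = [a for a in actions if not (a.violates & responsibilities)]
    if not permissible:
        return None  # no responsible action exists; escalate to a human
    return max(permissible,
               key=lambda a: (len(a.discharges & responsibilities), a.benefit))


resp = frozenset({"keep_humans_safe", "report_status"})
actions = [
    Action("speed_up", benefit=5, discharges=frozenset(),
           violates=frozenset({"keep_humans_safe"})),
    Action("proceed", benefit=3, discharges=frozenset(),
           violates=frozenset()),
    Action("notify", benefit=1, discharges=frozenset({"report_status"}),
           violates=frozenset()),
]
print(choose(actions, resp).name)  # "notify": it discharges a responsibility
```

Note how the outcome differs from plain utility maximisation: "speed_up" has the highest benefit but is excluded outright, and "notify" beats "proceed" because it discharges an active responsibility. A deliberately small, declarative structure like this is also what makes formal verification of the selection rule tractable.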

Key Findings
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
See Gateway to Research (GtR), above.
Impacts
See Gateway to Research (GtR), above.
Sectors submitted by the Researcher
See Gateway to Research (GtR), above.
Project URL:  
Further Information:  
Organisation Website: http://www.man.ac.uk