
Details of Grant 

EPSRC Reference: EP/W011654/1
Title: Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems
Principal Investigator: Vallor, Professor S
Other Investigators:
Rovatsos, Professor M
Kokciyan, Dr N
Vierkant, Dr T
Sethi, Dr N
Researcher Co-Investigators:
Project Partners:
Government of Scotland
NHSx
SAS Software Limited
Department: Sch of Philosophy Psychology & Language
Organisation: University of Edinburgh
Scheme: Standard Research
Starts: 01 January 2022
Ends: 31 October 2024
Value (£): 559,682
EPSRC Research Topic Classifications:
Artificial Intelligence
Cognitive Science Appl. in ICT
Ethics
EPSRC Industrial Sector Classifications:
Financial Services
Healthcare
Related Grants:
Panel History:
Panel Date: 29 Sep 2021
Panel Name: Trustworthy Autonomous Systems Programme - Responsibility Interview Panel
Outcome: Announced
Summary on Grant Application Form
As computing systems become increasingly autonomous--able to independently pilot vehicles, detect fraudulent banking transactions, or read and diagnose our medical scans--it is vital that humans can confidently assess and ensure their trustworthiness. Our project develops a novel, people-centred approach to overcoming a major obstacle to this, known as responsibility gaps.

Responsibility gaps occur when we cannot identify a person who is morally responsible for an action with high moral stakes, either because it is unclear who was behind the act, or because the agent does not meet the conditions for moral responsibility; for example, if the act was not voluntary, or if the agent was not aware of it. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust.

Autonomous systems create new responsibility gaps. They operate in high-stakes areas such as health and finance, but their actions may not be under the control of a morally responsible person, or may not be fully understandable or predictable by humans due to complex 'black-box' algorithms driving these actions. To make such systems trustworthy, we need to find a way of bridging these gaps.

Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps by boosting the ability of systems to deliver a vital and understudied component of responsibility, namely answerability.

When we say someone is 'answerable' for an act, it is a way of talking about their responsibility. But answerability is not about having someone to blame; it is about supplying people who are affected by our actions with the answers they need or expect. Responsible humans answer for actions in many different ways; they can explain, justify, reconsider, apologise, offer amends, make changes or take future precautions. Answerability encompasses a richer set of responsibility practices than explainability in computing, or accountability in law.

Often, the very act of answering for our actions improves us, helping us be more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps. It is not about who we name as the 'responsible person' (which is more difficult to identify in autonomous systems), but about what we owe to the people holding the system responsible. If the system as a whole (machines + people) can get better at giving the answers that are owed, the system can still meet present and future responsibilities to others. Hence, answerability is a system capability for executing responsibilities that can bridge responsibility gaps.

Our ambition is to provide the theoretical and empirical evidence and computational techniques that demonstrate how to enable autonomous systems (including wider "systems" of developers, owners, users, etc.) to supply the kinds of answers that people seek from trustworthy agents. Our first workstream establishes the theoretical and conceptual framework that allows answerability to be better understood and executed by system developers, users and regulators. The second workstream grounds this in a people-centred, evidence-driven approach by engaging various publics, users, beneficiaries and regulators of autonomous systems in the research. Focus groups, workshops and interviews will be used to discuss cases and scenarios in health, finance and government that reveal what kinds of answers people expect from trustworthy systems operating in these areas. Finally, our third workstream develops novel computational AI techniques for boosting the answerability of autonomous systems through more dialogical and responsive interfaces with users and regulators. Our research outputs and activities will produce a mix of academic, industry and public-facing resources for designing, deploying and governing more answerable autonomous systems.
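
By way of illustration only, and not as a description of the project's actual design, the following minimal Python sketch shows one possible shape a dialogical answerability interface could take: the system keeps a structured trace of each decision and answers "why", "what evidence", and "what recourse" questions over it. Every name here (DecisionTrace, AnswerabilityInterface, the query vocabulary, the transaction-screening scenario) is a hypothetical assumption for the sketch.

from dataclasses import dataclass

@dataclass
class DecisionTrace:
    """Hypothetical structured record of one automated decision."""
    decision: str        # what the system did
    reasons: list[str]   # factors that drove the outcome
    evidence: list[str]  # inputs those factors were based on
    recourse: list[str]  # actions open to the affected person

class AnswerabilityInterface:
    """Toy dialogical wrapper: maps a user's question to the answer owed."""

    def __init__(self, trace: DecisionTrace):
        self.trace = trace

    def answer(self, question: str) -> str:
        q = question.lower()
        if "why" in q:
            return (f"The decision '{self.trace.decision}' was reached because: "
                    + "; ".join(self.trace.reasons))
        if "evidence" in q or "based on" in q:
            return "It was based on: " + "; ".join(self.trace.evidence)
        if "recourse" in q or "appeal" in q or "change" in q:
            return "You can: " + "; ".join(self.trace.recourse)
        return ("I can answer 'why', 'what evidence', or 'what recourse' "
                "questions about this decision.")

# Invented usage example: a flagged banking transaction.
trace = DecisionTrace(
    decision="flag transaction for manual review",
    reasons=["amount deviates from account history", "new payee"],
    evidence=["12-month transaction history", "payee registry lookup"],
    recourse=["request human review", "submit supporting documents"],
)
ui = AnswerabilityInterface(trace)
print(ui.answer("Why was my transaction flagged?"))
print(ui.answer("What recourse do I have?"))

Even this toy example makes the broader point concrete: answerability is wider than explainability, since the interface must also say what the decision was based on and what the affected person can do next, not only why it happened.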
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.ed.ac.uk