
Details of Grant 

EPSRC Reference: EP/L024845/1
Title: Verifiable Autonomy
Principal Investigator: Fisher, Professor M
Other Investigators:
Researcher Co-Investigators:
Project Partners:
MIRA (Motor Industry Research Assoc.)
Department: Computer Science
Organisation: University of Liverpool
Scheme: Standard Research
Starts: 08 September 2014
Ends: 07 March 2019
Value (£): 640,791
EPSRC Research Topic Classifications:
Artificial Intelligence
Fundamentals of Computing
Robotics & Autonomy
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
EP/L024942/1 EP/L024861/1
Panel History:
Panel Date | Panel Name | Outcome
09 Apr 2014 | EPSRC ICT Responsive Mode - Apr 2014 | Announced
Summary on Grant Application Form


Autonomy is surely a core theme of technology in the 21st century. Within 20 years, we expect to see fully autonomous vehicles, aircraft, robots, devices, swarms, and software, all of which will (and must) be able to make their own decisions without direct human intervention. The economic implications are enormous: for example, the global civil unmanned air vehicle (UAV) market has been estimated at £6B over the next 10 years, while the worldwide market for robotic systems is expected to exceed $50B by 2025.

This potential is both exciting and frightening. Exciting, in that this technology allows us to develop systems and tackle tasks well beyond current possibilities. Frightening, in that the control of these systems is taken away from us. How do we know that they will work? How do we know that they are safe? And how can we trust them? None of these questions can be answered with current technology. We cannot say that such systems are safe, will not deliberately try to injure humans, and will always try their best to keep humans safe. Without such guarantees, these new technologies will neither be allowed by regulators nor accepted by the public.

Imagine a generic architecture for autonomous systems in which the choices the system makes can be guaranteed, and in which these guarantees are backed by strong mathematical proof. With such an architecture, upon which our autonomous systems (be they robots, vehicles, or software) can be based, we can indeed guarantee that our systems never intentionally act dangerously, will endeavour to be safe, and will, as far as possible, act in an ethical and trustworthy way. It is important to note that this is separate from the problem of how accurately the system understands its environment. Due to inaccuracy in modelling the real world, we cannot say that a system will be absolutely safe or will definitely achieve something; instead, we can say that it tries to be safe and decides to carry out a task to the best of its ability. This distinction is crucial: we can only prove that the system never decides to do the wrong thing; we cannot guarantee that accidents will never happen. Consequently, we also need the autonomous system to judge the quality of its own understanding and to take this into account when acting. Our verification methods should also establish that the system's choices do not exacerbate any potential safety problems.
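The distinction above (verifying decisions rather than outcomes) can be illustrated with a minimal, purely hypothetical sketch; the decision rule, states, and safety property below are invented for illustration and are not part of the project's actual verification framework. The idea is that, over a finite model of the agent's belief states, we can exhaustively check that the decision-making component never *chooses* an unsafe action, even though real-world outcomes remain uncertain.

```python
# Hypothetical sketch: exhaustively verifying a toy decision rule.
# We prove the agent never *decides* to do the wrong thing; we do not
# (and cannot) guarantee that accidents never happen in the real world.

from itertools import product

def decide(obstacle_near: bool, sensor_confident: bool) -> str:
    """Toy decision rule: act cautiously when perception is uncertain."""
    if not sensor_confident:
        return "stop"          # low-quality understanding -> safest choice
    return "wait" if obstacle_near else "advance"

def unsafe(obstacle_near: bool, action: str) -> bool:
    """Safety property: never advance while an obstacle is believed near."""
    return action == "advance" and obstacle_near

# Exhaustive check over the (finite) belief-state space: does any state
# lead the agent to select an action that violates the safety property?
violations = [(o, c) for o, c in product([True, False], repeat=2)
              if unsafe(o, decide(o, c))]

print(violations)  # [] -> the decision rule never chooses an unsafe action
```

In a realistic setting the finite enumeration would be replaced by model checking of the agent's actual decision-making code, but the shape of the guarantee is the same: a proof over all reachable decision states, not a claim about the physical environment.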

Our hypothesis is that by identifying and separating out the high-level decision-making component within autonomous systems, and providing comprehensive formal verification techniques for this, we can indeed directly tackle questions of safety, ethics, legality and reliability. In this project, we build on internationally leading work on agent verification (Fisher), control and learning (Veres), safety and ethics (Winfield), and practical autonomous systems (Veres, Winfield) to advance the underlying verification techniques and so develop a framework allowing us to tackle questions such as the above. In developing autonomous systems for complex and unknown environments, being able to answer such questions is crucial.

Key Findings, Potential Use in Non-Academic Contexts, Impacts, Sectors Submitted by the Researcher:
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.liv.ac.uk