
Details of Grant 

EPSRC Reference: EP/V026801/1
Title: UKRI Trustworthy Autonomous Systems Node in Verifiability
Principal Investigator: Mousavi, Professor M
Other Investigators:
Brown, Professor G; Woodcock, Professor JCP; Hierons, Professor R;
Cavalcanti, Professor ALC; Rojas Siles, Dr J; Ringert, Dr J O;
Kaddouh, Dr B; Kefalidou, Dr G; Tyukin, Professor IY;
Law, Professor LE; Fisher, Professor M; Dennis, Dr L;
Richardson, Professor R; Zhou, Dr C; Turker, Dr UC
Researcher Co-Investigators:
Project Partners:
Amazon Web Services (Not UK); BAE Systems; Bloc Digital;
BT; ClearSy; Connected Places Catapult;
Consequential Robotics Ltd; Cyberselves Universal Limited; D-RisQ Ltd;
Guidance Automation Ltd; Scoutek Ltd; Sheffield Children's NHS Foundation Trust;
The Shadow Robot Company
Department: Computer Science
Organisation: University of Leicester
Scheme: Standard Research
Starts: 01 November 2020
Ends: 15 August 2021
Value (£): 2,923,653
EPSRC Research Topic Classifications:
Software Engineering
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
15 Sep 2020 | Trustworthy Autonomous System Nodes Interview Panel B | Announced
Summary on Grant Application Form
Autonomous systems promise to improve our lives; driverless trains and robotic cleaners are examples of autonomous systems that are already among us and work well within confined environments. It is time we work to ensure developers can design trustworthy autonomous systems for dynamic environments and provide evidence of their trustworthiness.

Due to the complexity of autonomous systems, which typically involve AI components, low-level hardware control, and sophisticated interactions with humans and an uncertain environment, evidence of any nature requires efforts from a variety of disciplines. To tackle this challenge, we have gathered a consortium of experts on AI, robotics, human-computer interaction, systems and software engineering, and testing. Together, we will establish the foundations and techniques for verification of properties of autonomous systems to inform designs, provide evidence of key properties, and guide monitoring after deployment.

Currently, verifiability is hampered by several issues: difficulties in understanding how evidence provided by techniques that focus on individual aspects of a system (control engineering, AI, or human interaction, for example) composes to provide evidence for the system as a whole; difficulties of communication between stakeholders who use different languages and practices in their disciplines; difficulties in dealing with advanced concepts in AI, control and hardware design, and software for critical systems; and others. As a consequence, autonomous systems are often developed using advanced engineering techniques, but outdated approaches to verification.

We propose a creative programme of work that will enable fundamental changes to the current state of the art and of practice. We will define a mathematical framework that enables a common understanding of the diverse practices and concepts involved in verification of autonomy. Our framework will provide the mathematical underpinning, required by any engineering effort, to accommodate the notations used by the various disciplines. With this common understanding, we will justify translations between languages, compositions of artefacts (engineering models, tests, simulations, and so on) defined in different languages, and system-level inferences from verifications of components.
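To make the last of these concrete: a classical instance of a system-level inference from component verifications is an assume-guarantee rule. The rule below is an illustrative sketch from the compositional-verification literature, not a statement of this grant's (unspecified) framework; the triple \(\langle A \rangle\, M\, \langle G \rangle\) reads "if its environment satisfies assumption A, component M guarantees G":

\[
\frac{\langle A \rangle\, M_1\, \langle G_1 \rangle
      \qquad
      \langle G_1 \rangle\, M_2\, \langle G_2 \rangle}
     {\langle A \rangle\, M_1 \parallel M_2\, \langle G_2 \rangle}
\]

That is, if M1 guarantees G1 under assumption A, and M2 guarantees G2 whenever its environment satisfies G1, then the composition M1 || M2 guarantees G2 under A, so a verdict about the whole system is inferred from verdicts about its components.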

With such a rich foundation and wealth of results, we will transform the state of practice. Currently, developers build systems from scratch, or reuse components without any evidence of their operational conditions. The resulting systems are deployed in constrained conditions (reduced speed or a contained environment, for example) or offered for deployment at the user's own risk. Instead, we envisage the future availability of a store of verified autonomous systems and components. In such a future, users will find in the store not just system implementations, but also evidence of their operational conditions and expected behaviour (engineering models, mathematical results, tests, and so on). When a developer checks in a product, the store will require all these artefacts, described in well-understood languages, and will automatically verify the evidence of trustworthiness. Developers will also be able to check in components for use by other developers; these, too, will be accompanied by the evidence required to permit confidence in their use.
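As a minimal sketch only, assuming a hypothetical store interface (no such store exists yet, and every name below is invented for illustration), the check-in discipline just described might require each entry to carry its evidence alongside its implementation:

    from dataclasses import dataclass, field

    @dataclass
    class EvidenceArtefact:
        """One piece of evidence: an engineering model, proof, test suite, etc."""
        kind: str      # e.g. "engineering model", "mathematical result", "test"
        language: str  # the well-understood notation the artefact is written in
        payload: str   # reference to the artefact itself

    @dataclass
    class StoreEntry:
        """A store entry: an implementation plus its trustworthiness evidence."""
        name: str
        operational_conditions: str  # conditions under which behaviour is assured
        artefacts: list[EvidenceArtefact] = field(default_factory=list)

    def admit(entry: StoreEntry) -> bool:
        """Hypothetical check-in gate: entries without evidence are rejected.

        A real store would also check the artefacts themselves, as the
        summary envisages; this sketch only captures the shape of the data."""
        return bool(entry.operational_conditions) and bool(entry.artefacts)

The point of the sketch is that implementations become inseparable from their evidence, so the store can refuse entries whose trustworthiness cannot be checked.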

In this changed world, users will buy applications with clear guarantees of their operational requirements and profile. Users will also be able to ask for verification of adequacy for customised platforms and environments, for example. Verification will no longer be an issue.

Working with the EPSRC TAS Hub and other nodes, and our extensive range of academic and industrial partners, we will collaborate to ensure that the notations, verification techniques, and properties that we consider contribute to our common agenda of bringing autonomy to our everyday lives.
Key Findings, Potential Use in Non-Academic Contexts, Impacts, and Sectors Submitted by the Researcher: this information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.le.ac.uk