Anna Scaife, University of Manchester, £1,458,618
AI4Astro: AI for Discovery in Data Intensive Astrophysics
Data rates from modern scientific facilities are increasing, and it is no longer possible for scientists to extract scientifically valuable information by hand. This is particularly true for the Square Kilometre Array (SKA) telescopes. Consequently, in this era of big-data astrophysics, using machine learning to extract scientific information is essential to realising a timely scientific return from facilities such as the SKA. This research will consider how existing techniques can be adapted to achieve the key scientific goals of the SKA telescopes, and will target the development of new machine learning approaches that address key aspects of SKA scientific processing.
Maria Liakata, University of Warwick, £1,227,974
Creating time sensitive sensors from language & heterogeneous user generated content
Widespread use of digital technology has made it possible to obtain language data (social media, SMS) as well as heterogeneous data (mobile phone use, sensors). Such data can provide useful behavioural cues both at the individual level and across the wider population, enabling the creation of longitudinal digital phenotypes. Current methods in natural language processing (NLP) are not well suited to time-sensitive, sparse or missing data, or to personalised models of language use. The proposed research will address specific challenges within NLP, with applications to AI and mental health; outputs will include novel tools for personalised monitoring of behaviour through language use and user-generated content over time.
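To make the "sparse and missing data" challenge concrete, the sketch below (illustrative only, not the project's method; all timestamps are made up) turns irregularly timed user-generated posts into a regular daily signal in which days without data are kept explicit rather than silently dropped, the kind of longitudinal series a personalised model would consume.

```python
from collections import Counter
from datetime import date, timedelta

# Made-up posting timestamps for one user; real data would be far sparser.
posts = [date(2024, 1, 1), date(2024, 1, 1), date(2024, 1, 4)]

counts = Counter(posts)
start, end = min(posts), max(posts)

# Build a contiguous daily series; a count of 0 marks a day with no data,
# so downstream models can treat missingness as a signal in itself.
series = []
d = start
while d <= end:
    series.append((d, counts.get(d, 0)))
    d += timedelta(days=1)
```

Keeping the gaps visible matters because, for behavioural monitoring, a silent stretch can be as informative as the text itself.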
Neil Lawrence, University of Cambridge, £2,380,212
Innovation to Deployment: Machine Learning Systems Design
The AI systems this project is developing and deploying are based on interconnected machine learning components. The research focuses on AI-assisted design and monitoring of these systems to ensure they perform robustly, safely and accurately in their deployed environment. It addresses the entire pipeline of AI system development, from data acquisition to decision making, and proposes an ecosystem that includes system monitoring for performance, interpretability and fairness, placing these ideas in a wider context that also considers the availability, quality and ethics of data.
Timothy Dodwell, University of Exeter, £1,336,283
Intelligent Virtual Test Pyramids for High Value Manufacturing
There is a paradox in aerospace manufacturing. The aim is to design an aircraft with a very small probability of failing, yet to remain commercially viable a manufacturer can afford only a few tests of the fully assembled plane. How can engineers confidently predict such rare failure events from so little test data? This research will develop novel, unified AI methods that intelligently fuse models and data, enabling industry to reduce conservatism in engineering design and deliver faster, lighter, more sustainable aircraft.
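One minimal way to picture "fusing models and data" (a generic textbook sketch, not the project's actual method; all numbers are invented) is a Beta-Binomial update: a simulation campaign supplies a prior on a failure probability, and a handful of expensive physical tests update it.

```python
def fuse_model_and_tests(prior_alpha, prior_beta, n_tests, n_failures):
    """Bayesian Beta-Binomial update of a failure probability.

    prior_alpha, prior_beta: pseudo-counts encoding a model-based prior
    (e.g. from simulation); n_tests, n_failures: scarce physical tests.
    Returns the posterior parameters and posterior mean failure probability.
    """
    post_alpha = prior_alpha + n_failures
    post_beta = prior_beta + (n_tests - n_failures)
    mean = post_alpha / (post_alpha + post_beta)
    return post_alpha, post_beta, mean

# Prior equivalent to ~1 failure in 1000 simulated trials, then
# five clean physical tests of the assembled structure.
a, b, p = fuse_model_and_tests(prior_alpha=1.0, prior_beta=999.0,
                               n_tests=5, n_failures=0)
```

Because the prior carries the weight of the simulated evidence, five clean tests only nudge the estimate; the point of more sophisticated fusion methods is to extract far more from each scarce test than this naive update can.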
Yarin Gal, University of Oxford, £1,379,408
Democratizing Safe and Robust AI through Public Challenges in Bayesian Deep Learning
Probabilistic approaches to deep learning, such as Bayesian deep learning, are already in use in industry and academia; in medical applications they are used to address problems of AI safety and robustness. But major obstacles stand in the way of widespread adoption. This project proposes building new AI challenges, derived from industrial applications of AI, to assess safety and robustness. The challenges identified will set the course for a community-driven effort, leading to a self-sustaining ecosystem that bridges practitioners and AI researchers. This will offer new research opportunities for the AI community, helping to develop new safe and robust AI tools. Democratising safe and robust AI aligns with the UK's strategic plan set out by Hall and Pesenti and will help put the UK at the forefront of AI globally.
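For readers unfamiliar with the idea, the sketch below illustrates one common Bayesian deep learning approximation, Monte Carlo dropout: dropout is left on at test time, the stochastic forward pass is repeated many times, and the spread of the outputs serves as an uncertainty signal. The tiny one-layer "network" and its weights are invented for illustration, not drawn from the project.

```python
import random
import statistics

def forward(x, weights, p_drop, rng):
    # Inverted dropout: each unit is zeroed with probability p_drop,
    # survivors are rescaled by 1 / (1 - p_drop); outputs are averaged.
    keep = 1.0 - p_drop
    total = 0.0
    for w in weights:
        if rng.random() < keep:
            total += (w / keep) * x
    return total / len(weights)

def mc_dropout_predict(x, weights, p_drop=0.5, n_samples=1000, seed=0):
    # Repeat the stochastic forward pass; the sample mean approximates
    # the predictive mean, the sample spread approximates uncertainty.
    rng = random.Random(seed)
    samples = [forward(x, weights, p_drop, rng) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = mc_dropout_predict(2.0, weights=[0.3, -0.1, 0.8, 0.5])
```

In a safety-critical setting such as the medical applications mentioned above, a system can threshold `std` and defer to a human when the model is too uncertain, which is exactly the behaviour the proposed challenges would benchmark.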