
Details of Grant 

EPSRC Reference: EP/Y018516/1
Title: Sample Size guidance for developing and validating reliable and fair AI PREDICTion models in healthcare (SS-PREDICT)
Principal Investigator: Riley, Professor R
Other Investigators: Collins, Professor G; Denniston, Professor A; Ensor, Dr J; Dhiman, Dr P; Alderman, Dr J; Nirantharakumar, Professor K; Snell, Dr K; Cazier, Professor J; Archer, Miss L
Researcher Co-Investigators:
Project Partners: AstraZeneca; University Hospitals Birmingham NHS FT; University of Utrecht
Department: Institute of Applied Health Research
Organisation: University of Birmingham
Scheme: Standard Research - NR1
Starts: 02 October 2023
Ends: 01 April 2025
Value (£): 543,399
EPSRC Research Topic Classifications:
Artificial Intelligence
EPSRC Industrial Sector Classifications:
Healthcare
Related Grants:
Panel History:
Panel Date  | Panel Name                                                                     | Outcome
11 Jul 2023 | Artificial intelligence innovation to accelerate health research Expert Panel | Announced
08 Jun 2023 | Artificial intelligence innovation to accelerate health research Sift Panel B | Announced
Summary on Grant Application Form
Healthcare research is in an exciting new phase, with increasing access to information linking an individual's characteristics (such as age, family history or genetic information) to health outcomes (such as death, pain level or depression score). Researchers are using this information alongside artificial intelligence (AI) methods to help health professionals and patients predict future outcomes, so that treatment can be better personalised, quality of life improved, and life prolonged. For example, QRISK is used by doctors to calculate an individual's risk of heart disease within the next 10 years and to guide who needs treatment (e.g., statins) to reduce that risk. Such prediction tools are known as 'clinical prediction models' or 'clinical prediction algorithms'.

Thousands of clinical prediction model studies are published each year, but unfortunately most are not fit for purpose because they give inaccurate predictions. For example, some individuals predicted to be at low risk may actually be at high risk of adverse outcomes, and vice versa. Such inaccurate prediction models may lead to harm and represent research waste, where money spent on research does not lead to improvements in healthcare or patient outcomes. A major reason for poor prediction models is that they are developed using a sample of data that is too small, for example in terms of the total number of participants contributing data and the number of outcome events (e.g., deaths) observed among them.

To address this, this project will provide guidance and new methods that enable researchers to calculate the sample size needed to develop a reliable prediction model for their particular health condition (e.g., heart disease), prediction outcome (e.g., death) and setting (e.g., general practice) of interest. With this new guidance, researchers will know how large their dataset must be to reliably develop an AI prediction model and to precisely demonstrate its accuracy in the target population (e.g., the UK) and in key subgroups (e.g., different ethnic groups).
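
For a flavour of the calculations such guidance builds on, the sketch below implements two minimum sample size criteria for developing a binary-outcome model, published previously by members of this team (Riley et al., BMJ 2020): one targeting limited overfitting (expected shrinkage of at least 0.9) and one targeting a precise estimate of the overall outcome risk. All input values are illustrative assumptions, not figures from the grant.

```python
import math

def n_shrinkage(p, r2_cs, shrinkage=0.9):
    """Overfitting criterion: smallest n for which the expected uniform
    shrinkage of a logistic model with p candidate predictor parameters
    and anticipated Cox-Snell R-squared r2_cs is >= `shrinkage`."""
    return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))

def n_overall_risk(prevalence, margin=0.05):
    """Precision criterion: smallest n estimating the overall outcome
    proportion to within +/- `margin` with 95% confidence."""
    return math.ceil((1.96 / margin) ** 2 * prevalence * (1 - prevalence))

# Illustrative inputs only: 20 candidate parameters, anticipated
# Cox-Snell R-squared of 0.2, outcome prevalence of 10%.
print(n_shrinkage(p=20, r2_cs=0.2))      # -> 796
print(n_overall_risk(prevalence=0.10))   # -> 139
```

In practice the largest n across all criteria is taken; the team's pmsampsize software (available for R and Stata) implements the full set of published criteria.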

The project will be split into two topic areas: (i) sample size for model development, and (ii) sample size for testing a model's accuracy, also known as model evaluation or model validation. In the first, we will focus on the accuracy of different AI approaches to developing a model, including statistical methods and so-called 'machine learning' approaches, and tailor sample size guidance to each approach using mathematical derivations and computer-based (simulation) results. In the second, we will focus on testing (evaluating) the accuracy of an AI model, and derive mathematical solutions for the sample size needed to precisely estimate a range of accuracy measures relevant to researchers, health professionals and patients, both in the overall population and in subgroups where fairness checks are essential.
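
As a concrete illustration of the second topic area, the sketch below computes the validation sample size needed to estimate one common accuracy measure, the observed/expected (O/E) ratio, with a given precision, using the standard approximation SE(ln O/E) ~ sqrt((1 - phi)/(n * phi)) for outcome prevalence phi; a criterion of this form appears in the team's earlier work on validation sample size (e.g., Riley et al., Statistics in Medicine 2021). The prevalence, confidence-interval target and subgroup share below are illustrative assumptions.

```python
import math

def n_oe_ratio(prevalence, ci_ratio=1.5):
    """Smallest validation-cohort n for which the 95% CI for the O/E ratio
    spans at most a `ci_ratio`-fold range (upper limit / lower limit),
    using SE(ln O/E) ~= sqrt((1 - prevalence) / (n * prevalence))."""
    target_se = math.log(ci_ratio) / (2 * 1.96)
    return math.ceil((1 - prevalence) / (prevalence * target_se ** 2))

# Illustrative: 10% outcome prevalence. For a fairness check in a subgroup
# forming 20% of the population (assuming the same prevalence within it),
# the whole cohort must be scaled up so the subgroup alone is precise enough.
n = n_oe_ratio(prevalence=0.10)
print(n)                      # -> 842 participants overall
print(math.ceil(n / 0.20))    # -> 4210 if the subgroup must reach the same precision
```

The worked numbers illustrate why subgroup-level fairness checks can dominate the overall sample size requirement at validation.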

The project's findings will accelerate the production and identification of reliable and fair AI prediction models for use in healthcare. They will also provide quality standards for researchers to adhere to when developing and validating new AI models, and allow regulators (those deciding which models should be used, and how) to distinguish models that are reliable and fair from those that are unreliable and potentially harmful.

The work will be disseminated through computer software and web apps; publications in academic journals; dedicated workshops (with AI researchers and patient groups) and training courses to educate those working in or using prediction model research; and blogs, social media and tutorial videos at websites including www.prognosisresearch.com and YouTube to target the international academic community and a broad audience including patients and the public.

Key Findings, Potential Use in Non-Academic Contexts, Impacts, and Sectors Submitted by the Researcher:
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.bham.ac.uk