Artificial Intelligence (AI) is a multi-billion pound industry, estimated to deliver a 10% increase in UK GDP in 2030, with the potential to revolutionise how we live, work and behave.
AI is enabling a wide range of autonomous capabilities, from machine perception for remote inspection, to planning and decision-making in driverless cars, to routine robotic surgery. However, the domains and contexts where the impact of these technologies could be most profound, including the NHS, transport and national security, are dynamic and evolving environments in which AI system failures can cause significant harm. The UK Government is clear that safety must be paramount if the benefits of adopting AI technologies are to be unlocked. As AI and its applications continue to advance, society urgently needs professionals with the skills and knowledge to assure the safety of AI-enabled autonomous systems (AI-AS) in their real-world contexts of use.
The UKRI Centre for Doctoral Training (CDT) in Lifelong Safety Assurance of AI-enabled Autonomous Systems (SAINTS) will train 60 highly skilled professionals, drawn from diverse disciplines (Computer Science, Philosophy, Law, Sociology and Economics), to advance the safety assurance of AI-AS. Safety assurance concerns the actions, arguments and evidence that justify confidence that systems are acceptably safe in their operating contexts. The University of York is the world leader in this field and has pioneered work on the safety assurance of AI-AS across the disciplinary spectrum. Research within SAINTS will address the following two research themes:
(1) Lifelong safety of AI-AS: safety-driven design and training for evolving contexts; testing for open and uncertain operating environments; safe retraining and continual learning; proactive monitoring procedures and dynamic safety cases; ongoing assurance of societal and ethical acceptability.
(2) Safety of increasingly autonomous AI-AS: understanding human-AI interaction to design safe joint cognitive systems; assuring safe transitions between human and AI-AS control; achieving effective human oversight and AI-AS explainability; preserving human autonomy and responsibility.
Safety assurance is an inherently multidisciplinary field. Whatever their disciplinary background, all future leaders in AI-AS safety will need to understand, and work within, the wider technical, ethical, legal and societal context into which these systems are deployed. The CDT training programme therefore includes expert teaching in AI, safety, philosophical ethics, law and sociology, underpinned by ongoing training in Responsible AI.
Students will pursue their doctoral research within multidisciplinary research teams focused on 'grand challenges' aligned with the CDT's two research themes, such as the safety of human-AI teaming and the safety of AI-enabled mobile autonomous systems. The work will be grounded in use cases co-designed with the CDT's industrial, regulatory and public sector partners, with involvement from members of a Public Panel, to ensure the research is developed responsibly and is responsive to stakeholder needs. This will equip students with the capabilities and skills to move seamlessly from doctoral research into AI-AS safety roles in industry, regulation and the public sector, as well as into postdoctoral fellowships. This cohort-based approach represents a step change in training for AI-AS safety: by enabling peer-to-peer learning across disciplines and with external partners, it will strengthen and broaden the evidence base for AI-AS safety.
SAINTS will be located in the flagship Institute for Safe Autonomy at the University of York, the world's first facility dedicated to safe autonomy. The CDT is therefore ideally placed to create and sustain the next generation of experts, and a lasting community of professionals, who will pioneer evidence-based policy and practice for safe AI-AS.