AI technologies have the potential to unlock significant growth for the UK financial services sector through novel personalised products and services, improved cost-efficiency, increased consumer confidence, and more effective management of financial, systemic, and security risks. However, adoption of these technologies currently faces significant barriers. These stem from a capability deficit: while high-level principles for the trustworthy design, development and deployment of AI technologies ("trustworthy AI") are abundant, covering safety, fairness, privacy-awareness, security, transparency, accountability, robustness and resilience, there is little capability to translate them into concrete engineering, governance, and commercial practice.
In developing an actionable framework for trustworthy AI, the major research challenge lies in resolving the tensions and trade-offs that inevitably arise between these aspects in specific application settings. For example, reducing systemic risk may require data sharing that creates security risks; testing algorithms for fairness may require gathering more sensitive personal data; increasing the accuracy of predictive models may threaten the fair treatment of customers; and improved transparency may open systems up to being "gamed" by adversarial actors, creating vulnerabilities to system-wide risks.
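As a minimal illustration of the fairness/privacy tension, consider the sketch below (hypothetical data and column names, not part of the proposed programme): even the simplest fairness audit, a demographic-parity check, requires the auditor to hold the protected attribute itself, which is exactly the kind of sensitive personal data that privacy principles ask firms to minimise.

```python
# Minimal sketch, assuming hypothetical approval decisions and a hypothetical
# protected attribute "group". The point: the audit cannot be computed without
# access to the protected attribute, so fairness testing pulls against data
# minimisation.
import pandas as pd

def demographic_parity_gap(decisions: pd.Series, protected: pd.Series) -> float:
    """Absolute difference in approval rates across protected groups."""
    rates = decisions.groupby(protected).mean()  # approval rate per group
    return float(rates.max() - rates.min())

# Hypothetical example data: 8 credit decisions across two groups.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
})
print(demographic_parity_gap(df["approved"], df["group"]))  # 0.5
```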
This research challenge comes with a matching business challenge. Financial service providers adopting AI will experience a profound transformation in key areas of business such as customer engagement, risk, decisioning, and compliance, as these functions transition to largely data-driven, algorithmically mediated processes involving less and less human oversight. Yet current innovation, governance, partnership, and stakeholder-relationship management practices can be successfully adapted to these changes only once confident assurances can be given regarding the trustworthiness of the target AI applications.
Our research hypothesis rests on the close interplay between these research and business challenges: notions of trustworthiness in AI can be operationalised sufficiently to provide the necessary assurances only in a concrete business setting, which generates the specific requirements needed to drive fundamental research towards practical solutions, and only with solutions that balance all of these potentially conflicting requirements simultaneously.
Recognising the importance of close industry-academia collaboration to enable responsible innovation in this area, the partnership will embark on a systematic programme of industrially-driven interdisciplinary research, building on the strength of the existing Turing-HSBC partnership.
It will achieve a step change in the ability of financial service providers to use AI for trustworthy data-driven decision making while enhancing their resilience, accountability, and operational robustness. It will do so by improving our understanding of sequential data-driven decision making; privacy- and security-enhancing technologies; methods to balance ethical, commercial, and regulatory requirements; the connection between micro- and macro-level risk; validation and certification methods for AI models; and synthetic data generation.
To help drive innovation across the industry in a safe way, the partnership will help establish an appropriate regulatory and governance framework, together with a common "sandbox" environment for experimenting with emerging solutions and testing their viability in a real-world business context. This will also provide the cornerstone for impact anticipation and continual stakeholder engagement in the spirit of responsible research and innovation.