
Details of Grant 

EPSRC Reference: EP/X040461/1
Title: Optimisation for Game Theory and Machine Learning
Principal Investigator: Goldberg, Professor P
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Computer Science
Organisation: University of Oxford
Scheme: Standard Research
Starts: 01 February 2024 Ends: 31 January 2027 Value (£): 624,351
EPSRC Research Topic Classifications:
Fundamentals of Computing
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
03 Jul 2023 | EPSRC ICT Prioritisation Panel July 2023 | Announced
Summary on Grant Application Form
The project lies in the general area of mathematical analysis of algorithms, and computational complexity. Thus it focuses on provable performance guarantees of algorithms, and the fundamental limits to solvability of certain computational problems. This line of work is important in developing our understanding of why and how the associated problems are solved.

The problems of interest to this project consist of various related problems in optimisation (both continuous and discrete), some of which arise in Algorithmic Game Theory and some in machine learning and neural networks. For example, training a neural network involves searching for values of its weights that minimise its disagreement with a data set. This usually uses some version of gradient descent, and is thus treated as a problem of continuous local optimisation. It is widely observed that the efficacy of such learning systems is poorly understood: both the predictive power of the system, and the ability (in practice) of local optimisation to find a solution reasonably quickly, deserve to be better understood. Multiple solutions may exist, and we address questions about the trade-off between the quality of a solution and how hard it is to find. "Generative Adversarial Networks" have attracted widespread interest recently; these neural networks model the problem of learning a probability distribution as a zero-sum game. This in turn leads to new "minimax" optimisation problems, and questions about how efficiently they can be solved.
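The view of training as continuous local optimisation can be sketched concretely. The following is an illustrative example (not taken from the grant; the data, model, and step size are hypothetical choices): gradient descent adjusts the weights w of a linear model to minimise its squared-error "disagreement" with a synthetic data set.

```python
import numpy as np

# Illustrative sketch: training as continuous local optimisation.
# We fit weights w of a linear model y = X @ w by gradient descent
# on the mean squared error, i.e. the disagreement with the data.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # 50 synthetic data points, 3 features
w_true = np.array([1.0, -2.0, 0.5])   # hypothetical ground-truth weights
y = X @ w_true                        # noise-free targets, for illustration

w = np.zeros(3)                       # initial weights
lr = 0.05                             # step size, chosen for illustration
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w -= lr * grad                          # one gradient-descent step

print(w)   # close to w_true: local optimisation succeeds on this easy instance
```

Here the loss is convex, so gradient descent provably finds the global minimum; the questions the project raises concern the non-convex losses of real neural networks, where multiple local solutions may exist.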

The resulting optimisation problems are diverse, and include problems of discrete optimisation (in the case of clustering) and multi-objective optimisation (in the case of generative adversarial networks), for example. Their complexity-theoretic analysis is a unifying theme, using tools from the study of search problems for which efficiently-checkable solutions are guaranteed to exist. The project aims to advance the theory of hard problems in this domain, which in turn helps explain why certain algorithms are effective in practice, via an understanding of what features of problem instances they exploit. We are also interested in designing novel algorithms, or novel refinements of existing algorithms, with better performance guarantees than pre-existing ones. For example, this might build on "optimistic gradient descent", which has been shown to have desirable properties in some scenarios of multi-objective optimisation.
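The advantage of optimistic gradient descent in minimax settings can be seen on a toy instance. The following sketch (an illustrative example, not from the grant) uses the standard bilinear zero-sum game f(x, y) = x*y, where plain simultaneous gradient descent/ascent spirals away from the equilibrium (0, 0), while the optimistic variant, which extrapolates using the previous gradient, converges to it.

```python
import numpy as np

eta = 0.1  # step size, chosen for illustration

def v(z):
    """Descent direction (df/dx, -df/dy) for f(x, y) = x*y: min in x, max in y."""
    x, y = z
    return np.array([y, -x])

# Plain simultaneous gradient descent/ascent: the iterates spiral outward.
z = np.array([1.0, 1.0])
for _ in range(2000):
    z = z - eta * v(z)
norm_gda = np.linalg.norm(z)
print(norm_gda)   # large: plain descent/ascent diverges on this game

# Optimistic update: z_{t+1} = z_t - eta * (2 v(z_t) - v(z_{t-1})).
z, z_prev = np.array([1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(2000):
    z, z_prev = z - eta * (2 * v(z) - v(z_prev)), z
norm_ogd = np.linalg.norm(z)
print(norm_ogd)   # small: the optimistic iterates approach the equilibrium
```

The extrapolation term 2 v(z_t) - v(z_{t-1}) anticipates the opponent's next move, which is what damps the rotation that defeats the plain method; this is the kind of refined performance guarantee the summary alludes to.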

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Description
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.ox.ac.uk