EPSRC Reference: 
EP/X040461/1 
Title: 
Optimisation for Game Theory and Machine Learning 
Principal Investigator: 
Goldberg, Professor P 
Other Investigators: 

Researcher CoInvestigators: 

Project Partners: 

Department: 
Computer Science 
Organisation: 
University of Oxford 
Scheme: 
Standard Research 
Starts: 
01 December 2023 
Ends: 
30 November 2026 
Value (£): 
624,351

EPSRC Research Topic Classifications: 
Fundamentals of Computing 


EPSRC Industrial Sector Classifications: 
No relevance to Underpinning Sectors 


Related Grants: 

Panel History: 
Panel Date  Panel Name  Outcome 
03 Jul 2023  EPSRC ICT Prioritisation Panel July 2023  Announced


Summary on Grant Application Form 
The project lies in the general area of mathematical analysis of algorithms, and computational complexity. Thus it focuses on provable performance guarantees of algorithms, and the fundamental limits to solvability of certain computational problems. This line of work is important in developing our understanding of why and how such problems can be solved in practice.
The problems of interest to this project consist of various related problems in optimisation (both continuous and discrete), some arising in Algorithmic Game Theory and others in machine learning and neural networks. For example, the process of training a neural network involves searching for values of the weights of the neural network that minimise its disagreement with a data set. This usually uses some version of gradient descent, and is thus treated as a problem of continuous local optimisation. A widespread observation is that the efficacy of such learning systems is poorly understood: both the predictive power of the system, and the ability (in practice) of the local optimisation to find a solution reasonably quickly, deserve to be better understood. Multiple solutions may exist, and we address questions about the trade-off between quality of solution, and how hard it is to find. "Generative Adversarial Networks" have attracted widespread interest recently; these neural networks model the problem of learning a probability distribution as a zero-sum game. This in turn leads to new "minimax" optimisation problems, and questions about how efficiently they can be solved.
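To illustrate why minimax optimisation raises new questions, consider a minimal sketch (not from the grant): simultaneous gradient descent-ascent on the bilinear zero-sum game f(x, y) = x * y, where x minimises and y maximises. The unique equilibrium is (0, 0), yet the iterates spiral outward rather than converging.

```python
import math

def gda(x, y, eta=0.1, steps=2000):
    """Simultaneous gradient descent (in x) / ascent (in y) on f(x, y) = x*y."""
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - eta * gx, y + eta * gy  # simultaneous update
    return x, y

x_final, y_final = gda(1.0, 1.0)
start_norm = math.hypot(1.0, 1.0)
end_norm = math.hypot(x_final, y_final)
# Each step multiplies the distance to the equilibrium by
# sqrt(1 + eta^2) > 1, so end_norm greatly exceeds start_norm.
```

The divergence here is provable, not an artefact of the step size: for this game, plain gradient descent-ascent fails for every positive learning rate.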
The resulting optimisation problems are diverse, and include problems of discrete optimisation (in the case of clustering) and multi-objective optimisation (in the case of generative adversarial networks), for example. But their complexity-theoretic analysis is a unifying theme, using tools from the study of search problems for which efficiently-checkable solutions are guaranteed to exist. The project aims to advance the theory of hard problems in this domain, which in turn assists with the explanation of efficacious algorithms, via an understanding of what features of problems in practice they exploit. We are also interested in designing novel algorithms, or novel refinements of existing algorithms, having better performance guarantees than pre-existing ones. For example, this might build on "optimistic gradient descent", which has been shown to have desirable properties in some scenarios of multi-objective optimisation.

Key Findings 
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Potential use in nonacademic contexts 
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Impacts 
Description 
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk 
Summary 

Date Materialised 


Sectors submitted by the Researcher 
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Project URL: 

Further Information: 

Organisation Website: 
http://www.ox.ac.uk 